ConvTranspose

class nvtripy.ConvTranspose(in_channels: int, out_channels: int, kernel_dims: Sequence[int], padding: Sequence[Sequence[int]] | None = None, stride: Sequence[int] | None = None, groups: int | None = None, dilation: Sequence[int] | None = None, bias: bool = True, dtype: dtype = float32)[source]

Applies a transposed convolution operation on the input tensor.

Transposed convolution, also known as fractionally-strided convolution or deconvolution, performs a “reverse” of a standard convolution: it upsamples the input to a larger spatial resolution, such that applying a standard convolution and then a transposed convolution with the same parameters recovers the original spatial dimensions.

The transposed convolution operation can be thought of as a regular convolution operation applied to a dilated (i.e. zeros are inserted between the input values) version of the input tensor. The stride parameter controls the dilation factor, and the padding effectively indicates how much to crop from the output.

Note that transposed convolution is not a strict inverse of standard convolution.
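The shape relationship can be made concrete with the output-size formula documented under __call__ below. A minimal sketch as a plain Python helper (not part of the nvtripy API):

def conv_transpose_out_size(d_in, stride, pad_before, pad_after, dilation, kernel):
    # D_out = (D_in - 1) * stride - pad_before - pad_after
    #         + dilation * (kernel - 1) + 1
    return (d_in - 1) * stride - pad_before - pad_after + dilation * (kernel - 1) + 1

# First example below: 2x2 input, 3x3 kernel, stride 2, no padding -> 5x5 output.
assert conv_transpose_out_size(2, stride=2, pad_before=0, pad_after=0, dilation=1, kernel=3) == 5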

Parameters:
  • in_channels (int) – The number of channels in the input tensor.

  • out_channels (int) – The number of channels produced by the convolution.

  • kernel_dims (Sequence[int]) – The spatial shape of the kernel.

  • padding (Sequence[Sequence[int]]) – A sequence of pairs of integers of length \(M\) indicating the implicit zero padding applied along each spatial dimension before and after the dimension respectively, where \(M\) is the number of spatial dimensions, i.e. \(M = \text{rank(input)} - 2\). In particular, \(\text{dilation}_i \times (\text{kernel_dims}_i - 1) - \text{padding}_i\) will be added to or cropped from the input. This is set so that when this module is initialized with the same parameters as nvtripy.Conv, they are inverses with respect to the input/output shapes. Defaults to all 0.

  • stride (Sequence[int]) – A sequence of length \(M\) indicating the stride of convolution across each spatial dimension, where \(M\) is the number of spatial dimensions, i.e. \(M = \text{rank(input)} - 2\). For transposed convolution, this effectively controls the dilation of the input; for each dimension with value \(x\), \(x-1\) zeros are inserted between input values. Defaults to all 1.

  • groups (int) – The number of groups in a grouped convolution, where the input and output channels are divided into groups groups. Each output group is connected only to its corresponding input group through the convolution kernel weights, and the outputs of all groups are concatenated to produce the final result. This is in contrast to a standard convolution, which has full connectivity between all input and output channels. Grouped convolutions reduce computational cost by a factor of groups and can benefit model parallelism and memory usage. Note that in_channels and out_channels must both be divisible by groups; see the sketch after this list. Defaults to 1 (standard convolution).

  • dilation (Sequence[int]) – A sequence of length \(M\) indicating the number of zeros to insert between kernel weights across each spatial dimension, where \(M\) is the number of spatial dimensions, i.e. \(M = \text{rank(input)} - 2\). This is known as the à trous algorithm and increases the receptive field of the kernel; for a transposed convolution, this enlarges the output rather than downsampling it. For each dimension with value \(x\), \(x-1\) zeros are inserted between kernel weights. Defaults to all 1.

  • bias (bool) – Whether to add a bias term to the output or not. When True, the learned bias has a shape of \((\text{out_channels},)\). Defaults to True.

  • dtype (dtype) – The data type to use for the convolution weights.
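The effect of groups is visible in the weight shape. A minimal sketch with illustrative values (not drawn from the examples below):

# A grouped transposed convolution: 4 input and 4 output channels in 2 groups.
grouped = tp.ConvTranspose(4, 4, (2, 2), groups=2, bias=False, dtype=tp.float32)
# Each group connects 2 input channels to 2 output channels, so the weight
# shape is (in_channels, out_channels / groups, *kernel_dims) = (4, 2, 2, 2).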

Example
input = tp.reshape(tp.arange(4, dtype=tp.float32), (1, 1, 2, 2))
upsample = tp.ConvTranspose(
    1, 1, (3, 3), stride=(2, 2), bias=False, dtype=tp.float32
)
output = upsample(input)
Local Variables
>>> input
tensor(
    [[[[0.0000, 1.0000],
       [2.0000, 3.0000]]]], 
    dtype=float32, loc=gpu:0, shape=(1, 1, 2, 2))

>>> upsample
ConvTranspose(
    weight: Parameter = (shape=[1, 1, 3, 3], dtype=float32),
)
>>> upsample.state_dict()
{
    weight: tensor(
        [[[[0.0000, 1.0000, 2.0000],
           [3.0000, 4.0000, 5.0000],
           [6.0000, 7.0000, 8.0000]]]], 
        dtype=float32, loc=gpu:0, shape=(1, 1, 3, 3)),
}

>>> output
tensor(
    [[[[0.0000, 0.0000, 0.0000, 1.0000, 2.0000],
       [0.0000, 0.0000, 3.0000, 4.0000, 5.0000],
       [0.0000, 2.0000, 10.0000, 10.0000, 14.0000],
       [6.0000, 8.0000, 19.0000, 12.0000, 15.0000],
       [12.0000, 14.0000, 34.0000, 21.0000, 24.0000]]]], 
    dtype=float32, loc=gpu:0, shape=(1, 1, 5, 5))
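To connect this output to the dilated-input interpretation above, here is a minimal sketch in plain NumPy (independent of nvtripy) that reproduces it by scattering each input value, scaled by the kernel, at stride-2 offsets:

import numpy as np

kernel = np.arange(9, dtype=np.float32).reshape(3, 3)
inp = np.arange(4, dtype=np.float32).reshape(2, 2)
out = np.zeros((5, 5), dtype=np.float32)  # (2 - 1) * 2 + 3 = 5
for i in range(2):
    for j in range(2):
        # Each input value contributes a scaled copy of the kernel at a stride-2 offset.
        out[2 * i : 2 * i + 3, 2 * j : 2 * j + 3] += inp[i, j] * kernel
# `out` now matches the `output` tensor shown above.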
Example: "Inversing" Convolution
# This process restores the input spatial dimensions, but not its values
input = tp.reshape(tp.arange(16, dtype=tp.float32), (1, 1, 4, 4))
downsample = tp.Conv(
    1,
    1,
    (2, 2),
    stride=(2, 2),
    padding=((1, 1), (1, 1)),
    bias=False,
    dtype=tp.float32,
)
upsample = tp.ConvTranspose(
    1,
    1,
    (2, 2),
    stride=(2, 2),
    padding=((1, 1), (1, 1)),
    bias=False,
    dtype=tp.float32,
)
output_down = downsample(input)
output_up = upsample(output_down)
Local Variables
>>> input
tensor(
    [[[[0.0000, 1.0000, 2.0000, 3.0000],
       [4.0000, 5.0000, 6.0000, 7.0000],
       [8.0000, 9.0000, 10.0000, 11.0000],
       [12.0000, 13.0000, 14.0000, 15.0000]]]], 
    dtype=float32, loc=gpu:0, shape=(1, 1, 4, 4))

>>> downsample
Conv(
    weight: Parameter = (shape=[1, 1, 2, 2], dtype=float32),
)
>>> downsample.state_dict()
{
    weight: tensor(
        [[[[0.0000, 1.0000],
           [2.0000, 3.0000]]]], 
        dtype=float32, loc=gpu:0, shape=(1, 1, 2, 2)),
}

>>> upsample
ConvTranspose(
    weight: Parameter = (shape=[1, 1, 2, 2], dtype=float32),
)
>>> upsample.state_dict()
{
    weight: tensor(
        [[[[0.0000, 1.0000],
           [2.0000, 3.0000]]]], 
        dtype=float32, loc=gpu:0, shape=(1, 1, 2, 2)),
}

>>> output_down
tensor(
    [[[[0.0000, 8.0000, 6.0000],
       [28.0000, 54.0000, 22.0000],
       [12.0000, 14.0000, 0.0000]]]], 
    dtype=float32, loc=gpu:0, shape=(1, 1, 3, 3))

>>> output_up
tensor(
    [[[[0.0000, 16.0000, 24.0000, 12.0000],
       [28.0000, 0.0000, 54.0000, 0.0000],
       [84.0000, 108.0000, 162.0000, 44.0000],
       [12.0000, 0.0000, 14.0000, 0.0000]]]], 
    dtype=float32, loc=gpu:0, shape=(1, 1, 4, 4))
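Note that output_up recovers the input's (1, 1, 4, 4) shape but not its values: transposed convolution inverts the shape transformation of nvtripy.Conv, not the operation itself.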
dtype: dtype

The data type to use for the convolution weights.

padding: Sequence[Sequence[int]]

A sequence of pairs of integers of length \(M\) indicating the implicit zero padding applied along each spatial dimension before and after the dimension respectively, where \(M\) is the number of spatial dimensions, i.e. \(M = \text{rank(input)} - 2\). In particular, \(\text{dilation}_i \times (\text{kernel_dims}_i - 1) - \text{padding}_i\) will be added to or cropped from the input. This is set so that when this module is initialized with the same parameters as nvtripy.Conv, they are inverses with respect to the input/output shapes.

stride: Sequence[int]

A sequence of length \(M\) indicating the stride of convolution across each spatial dimension, where \(M\) is the number of spatial dimensions, i.e. \(M = \text{rank(input)} - 2\). For transposed convolution, this effectively controls the dilation of the input; for each dimension with value \(x\), \(x-1\) zeros are inserted between input values.

groups: int

The number of groups in a grouped convolution, where the input and output channels are divided into groups groups. Each output group is connected only to its corresponding input group through the convolution kernel weights, and the outputs of all groups are concatenated to produce the final result. This is in contrast to a standard convolution, which has full connectivity between all input and output channels. Grouped convolutions reduce computational cost by a factor of groups and can benefit model parallelism and memory usage. Note that in_channels and out_channels must both be divisible by groups.

dilation: Sequence[int]

A sequence of length \(M\) indicating the number of zeros to insert between kernel weights across each spatial dimension, where \(M\) is the number of spatial dimensions, i.e. \(M = \text{rank(input)} - 2\). This is known as the à trous algorithm and increases the receptive field of the kernel; for a transposed convolution, this enlarges the output rather than downsampling it. For each dimension with value \(x\), \(x-1\) zeros are inserted between kernel weights.
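To illustrate the effect on output size, a minimal sketch with illustrative values: with a 4x4 input, a 3x3 kernel, stride 1, no padding, and dilation 2, the shape formula under __call__ gives \((4 - 1) \times 1 + 2 \times (3 - 1) + 1 = 8\) per spatial dimension.

input = tp.reshape(tp.arange(16, dtype=tp.float32), (1, 1, 4, 4))
dilated = tp.ConvTranspose(
    1, 1, (3, 3), dilation=(2, 2), bias=False, dtype=tp.float32
)
output = dilated(input)  # shape: (1, 1, 8, 8)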

load_state_dict(state_dict: Dict[str, Tensor], strict: bool = True) Tuple[Set[str], Set[str]]

Loads parameters from the provided state_dict into the current module. This will recurse over any nested child modules.

Parameters:
  • state_dict (Dict[str, Tensor]) – A dictionary mapping names to parameters.

  • strict (bool) – If True, keys in state_dict must exactly match those in this module; if they do not, an error is raised.

Returns:

  • missing_keys: keys that are expected by this module but not provided in state_dict.

  • unexpected_keys: keys that are not expected by this module but provided in state_dict.

Return type:

Tuple[Set[str], Set[str]]

Example
# Using the `module` and `state_dict` from the `state_dict()` example:
print(f"Before: {module.param}")

state_dict["param"] = tp.zeros((2,), dtype=tp.float32)
module.load_state_dict(state_dict)

print(f"After: {module.param}")
Output
Before: tensor([1.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,))
After: tensor([0.0000, 0.0000], dtype=float32, loc=gpu:0, shape=(2,))
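A partial load can reuse the same module. A minimal sketch, assuming that with strict=False mismatched keys are reported through the returned sets rather than raising:

missing, unexpected = module.load_state_dict(
    {"param": tp.zeros((2,), dtype=tp.float32)}, strict=False
)
# `missing` holds keys this module expected but did not receive;
# `unexpected` holds keys provided but not expected by this module.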

See also

state_dict()

named_children() Iterator[Tuple[str, Module]]

Returns an iterator over immediate children of this module, yielding tuples containing the name of the child module and the child module itself.

Returns:

An iterator over tuples containing the name of the child module and the child module itself.

Return type:

Iterator[Tuple[str, Module]]

Example
class StackedLinear(tp.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = tp.Linear(2, 2)
        self.linear2 = tp.Linear(2, 2)


stacked_linear = StackedLinear()

for name, module in stacked_linear.named_children():
    print(f"{name}: {type(module).__name__}")
Output
linear1: Linear
linear2: Linear
named_parameters() Iterator[Tuple[str, Tensor]]

Returns an iterator over the parameters of this module, yielding tuples containing the name of the parameter and the parameter itself.

Returns:

An iterator over tuples containing the name of a parameter and the parameter itself.

Return type:

Iterator[Tuple[str, Tensor]]

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.alpha = tp.Tensor(1)
        self.beta = tp.Tensor(2)


module = MyModule()

for name, parameter in module.named_parameters():
    print(f"{name}: {parameter}")
Output
alpha: tensor(1, dtype=int32, loc=gpu:0, shape=())
beta: tensor(2, dtype=int32, loc=gpu:0, shape=())
state_dict() Dict[str, Tensor]

Returns a dictionary mapping names to parameters in the module. This will recurse over any nested child modules.

Returns:

A dictionary mapping names to parameters.

Return type:

Dict[str, Tensor]

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.param = tp.ones((2,), dtype=tp.float32)
        self.linear1 = tp.Linear(2, 2)
        self.linear2 = tp.Linear(2, 2)


module = MyModule()

state_dict = module.state_dict()
Local Variables
>>> state_dict
{
    param: tensor([1.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    linear1.weight: tensor(
        [[0.0000, 1.0000],
         [2.0000, 3.0000]], 
        dtype=float32, loc=gpu:0, shape=(2, 2)),
    linear1.bias: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    linear2.weight: tensor(
        [[0.0000, 1.0000],
         [2.0000, 3.0000]], 
        dtype=float32, loc=gpu:0, shape=(2, 2)),
    linear2.bias: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
}
bias: Tensor | None

The bias term to add to the output. The bias has a shape of \((\text{out_channels},)\).

weight: Tensor

The kernel of shape \((\text{in_channels}, \frac{\text{out_channels}}{\text{groups}}, *\text{kernel_dims})\).

__call__(input: Tensor) Tensor[source]

Applies the transposed convolution to the input tensor.

Parameters:

input (Tensor) – The input tensor.

Returns:

A tensor of the same data type as the input with a shape \((N, \text{out_channels}, D_{0_{\text{out}}},\ldots,D_{n_{\text{out}}})\) where \(D_{k_{\text{out}}} = (D_{k_{\text{in}}} - 1) \times \text{stride}_k - \text{padding}_{k_0} - \text{padding}_{k_1} + \text{dilation}_k \times (\text{kernel_dims}_k - 1) + 1\)

Return type:

Tensor
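As a worked check against the first example above (2x2 input, 3x3 kernel, stride 2, no padding, dilation 1): \(D_{k_{\text{out}}} = (2 - 1) \times 2 - 0 - 0 + 1 \times (3 - 1) + 1 = 5\), matching the (1, 1, 5, 5) output shape shown there.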