LayerNorm

class nvtripy.LayerNorm(normalized_shape: int | Sequence[int], dtype: dtype = float32, eps: float = 1e-05)[source]

Bases: Module

Applies layer normalization over the input tensor:

\(\text{LayerNorm}(x) = \Large \frac{x - \bar{x}}{ \sqrt{\sigma^2 + \epsilon}} \normalsize * \gamma + \beta\)

where \(\bar{x}\) is the mean and \(\sigma^2\) is the variance.

The mean and variance are computed over the last \(D\) dimensions, where \(D\) is the number of dimensions in \(\text{normalized_shape}\).
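
The formula can be verified with a plain-Python sketch. For illustration it assumes weight \(\gamma = [0, 1, 2]\) and bias \(\beta = [0, 1, 2]\), matching the parameter values shown in the example below; this is a standalone re-derivation, not Tripy's implementation:

```python
import math

def layer_norm(x, gamma, beta, eps=1e-5):
    # Mean and variance over the normalized (last) dimension.
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    # Normalize, then apply the learned scale and shift.
    return [(v - mean) / math.sqrt(var + eps) * g + b
            for v, g, b in zip(x, gamma, beta)]

row = [0.0, 1.0, 2.0]
out = layer_norm(row, gamma=[0.0, 1.0, 2.0], beta=[0.0, 1.0, 2.0])
# out is approximately [0.0, 1.0, 4.4495], matching the example output below
```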

Parameters:
  • normalized_shape (int | Sequence[int]) – The size of the feature dimension(s) of the input over which normalization is performed. If a single integer is provided, it is treated as a 1-dimensional shape.

  • dtype (dtype) – The data type to use for the weight and bias parameters.

  • eps (float) – \(\epsilon\) value to prevent division by zero.

Example
layer_norm = tp.LayerNorm(3)

input = tp.iota((2, 3), dim=1)
output = layer_norm(input)
Local Variables
>>> layer_norm
LayerNorm(
    weight: Parameter = (shape=[3], dtype=float32),
    bias: Parameter = (shape=[3], dtype=float32),
)
>>> layer_norm.state_dict()
{
    weight: tensor([0.0000, 1.0000, 2.0000], dtype=float32, loc=gpu:0, shape=(3,)),
    bias: tensor([0.0000, 1.0000, 2.0000], dtype=float32, loc=gpu:0, shape=(3,)),
}

>>> input
tensor(
    [[0.0000, 1.0000, 2.0000],
     [0.0000, 1.0000, 2.0000]], 
    dtype=float32, loc=gpu:0, shape=(2, 3))

>>> output
tensor(
    [[0.0000, 1.0000, 4.4495],
     [0.0000, 1.0000, 4.4495]], 
    dtype=float32, loc=gpu:0, shape=(2, 3))
load_state_dict(state_dict: Dict[str, Tensor], strict: bool = True) → Tuple[Set[str], Set[str]]

Loads parameters from the provided state_dict into the current module. This will recurse over any nested child modules.

Parameters:
  • state_dict (Dict[str, Tensor]) – A dictionary mapping names to parameters.

  • strict (bool) – If True, the keys in state_dict must exactly match the keys expected by this module; any mismatch will raise an error.

Returns:

  • missing_keys: keys that are expected by this module but not provided in state_dict.

  • unexpected_keys: keys that are not expected by this module but provided in state_dict.

Return type:

A tuple of two sets of strings: (missing_keys, unexpected_keys).
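
The key-matching behavior described above can be illustrated with plain sets. This is a sketch using hypothetical key names, not Tripy's internal implementation:

```python
def diff_keys(expected, provided):
    # Keys the module expects but the state dict does not supply.
    missing_keys = set(expected) - set(provided)
    # Keys the state dict supplies but the module does not expect.
    unexpected_keys = set(provided) - set(expected)
    return missing_keys, unexpected_keys

expected = {"weight", "bias"}
provided = {"weight", "scale"}
print(diff_keys(expected, provided))  # → ({'bias'}, {'scale'})
```

With strict=True, a non-empty result in either set raises an error; with strict=False, both sets are simply returned to the caller.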

Example
# Using the `module` and `state_dict` from the `state_dict()` example:
print(f"Before: {module.param}")

state_dict["param"] = tp.zeros((2,), dtype=tp.float32)
module.load_state_dict(state_dict)

print(f"After: {module.param}")
Output
Before: tensor([1.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,))
After: tensor([0.0000, 0.0000], dtype=float32, loc=gpu:0, shape=(2,))

See also

state_dict()

named_children() → Iterator[Tuple[str, Module]]

Returns an iterator over immediate children of this module, yielding tuples containing the name of the child module and the child module itself.

Returns:

An iterator over tuples containing the name of the child module and the child module itself.

Return type:

Iterator[Tuple[str, Module]]

Example
class StackedLinear(tp.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = tp.Linear(2, 2)
        self.linear2 = tp.Linear(2, 2)


stacked_linear = StackedLinear()

for name, module in stacked_linear.named_children():
    print(f"{name}: {type(module).__name__}")
Output
linear1: Linear
linear2: Linear
named_parameters() → Iterator[Tuple[str, Tensor]]

Returns an iterator over the parameters of this module, yielding tuples containing the name of the parameter and the parameter itself.

Returns:

An iterator over tuples containing the name of a parameter and the parameter itself.

Return type:

Iterator[Tuple[str, Tensor]]

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.alpha = tp.Tensor(1)
        self.beta = tp.Tensor(2)


linear = MyModule()

for name, parameter in linear.named_parameters():
    print(f"{name}: {parameter}")
Output
alpha: tensor(1, dtype=int32, loc=gpu:0, shape=())
beta: tensor(2, dtype=int32, loc=gpu:0, shape=())
state_dict() → Dict[str, Tensor]

Returns a dictionary mapping names to parameters in the module. This will recurse over any nested child modules.

Returns:

A dictionary mapping names to parameters.

Return type:

Dict[str, Tensor]
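
The recursion over nested child modules produces dotted names such as linear1.weight, as the example below shows. A minimal sketch of that flattening using plain dictionaries (a hypothetical structure, not Tripy's implementation):

```python
def flatten(params, prefix=""):
    # Recursively flatten nested dicts into dotted parameter names.
    flat = {}
    for name, value in params.items():
        key = f"{prefix}{name}"
        if isinstance(value, dict):  # a nested child module
            flat.update(flatten(value, prefix=f"{key}."))
        else:  # a leaf parameter
            flat[key] = value
    return flat

module = {"param": [1.0, 1.0], "linear1": {"weight": [[0.0]], "bias": [0.0]}}
print(sorted(flatten(module)))  # → ['linear1.bias', 'linear1.weight', 'param']
```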

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.param = tp.ones((2,), dtype=tp.float32)
        self.linear1 = tp.Linear(2, 2)
        self.linear2 = tp.Linear(2, 2)


module = MyModule()

state_dict = module.state_dict()
Local Variables
>>> state_dict
{
    param: tensor([1.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    linear1.weight: tensor(
        [[0.0000, 1.0000],
         [2.0000, 3.0000]], 
        dtype=float32, loc=gpu:0, shape=(2, 2)),
    linear1.bias: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    linear2.weight: tensor(
        [[0.0000, 1.0000],
         [2.0000, 3.0000]], 
        dtype=float32, loc=gpu:0, shape=(2, 2)),
    linear2.bias: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
}
dtype: dtype

The data type used to perform the operation.

normalized_shape: Sequence[int]

The shape of the trailing dimensions of the input tensor over which normalization is performed.

weight: Tensor

The \(\gamma\) parameter of shape \(\text{normalized_shape}\).

bias: Tensor

The \(\beta\) parameter of shape \(\text{normalized_shape}\).

eps: float

A value added to the denominator to prevent division by zero.

__call__(x: Tensor) → Tensor[source]

Applies layer normalization to the input tensor.

Parameters:

x (Tensor) – The input tensor.

Returns:

A tensor of the same shape as the input.

Return type:

Tensor