BatchNorm¶
- class tripy.BatchNorm(num_features: int, dtype: dtype = float32, eps: float = 1e-05)[source]¶
Bases:
Module
Applies batch normalization over an N-dimensional input tensor using precomputed statistics:
\(y = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} * \gamma + \beta\)
- where:
\(\mu\) is the precomputed running mean.
\(\sigma^2\) is the precomputed running variance.
\(\gamma\) and \(\beta\) are learnable parameter vectors (weight and bias).
This implementation supports 1D, 2D, and 3D inputs (e.g., time-series, images, and volumetric data). Normalization is applied over the feature dimension, which is the second dimension of the input.
- Parameters:
num_features (int) – The number of feature channels in the input tensor (the size of the second dimension).
dtype (dtype) – The data type to use for the weight, bias, running_mean, and running_var parameters.
eps (float) – \(\epsilon\) value added to the denominator to prevent division by zero during normalization.
Example
batch_norm = tp.BatchNorm(2)

input = tp.iota((1, 2, 1, 1))
output = batch_norm(input)
>>> batch_norm.state_dict()
{
    weight: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    bias: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    running_mean: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    running_var: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
}

>>> input
tensor(
    [[[[0.0000]],
      [[0.0000]]]],
    dtype=float32, loc=gpu:0, shape=(1, 2, 1, 1))

>>> output
tensor(
    [[[[0.0000]],
      [[0.0000]]]],
    dtype=float32, loc=gpu:0, shape=(1, 2, 1, 1))
- num_features: int¶
The number of feature channels in the input tensor (the size of the second dimension).
- eps: float¶
\(\epsilon\) value added to the denominator to prevent division by zero during normalization.
- running_mean: Parameter¶
The running mean for the feature channels of shape \([\text{num_features}]\).