Embedding

class nvtripy.Embedding(num_embeddings: int, embedding_dim: int, dtype: dtype = float32)[source]

Bases: Module

A lookup table for embedding vectors of a fixed size. Embedding vectors can be retrieved by their indices.

Parameters:
  • num_embeddings (int) – Number of embedding vectors in the lookup table.

  • embedding_dim (int) – Size of each embedding vector in the lookup table.

  • dtype (dtype) – The data type to use for the weight parameter.

Example
embedding = tp.Embedding(num_embeddings=4, embedding_dim=6)

input = tp.Tensor([0, 2], dtype=tp.int32)
output = embedding(input)
Local Variables
>>> embedding
Embedding(
    weight: Parameter = (shape=[4, 6], dtype=float32),
)
>>> embedding.state_dict()
{
    weight: tensor(
        [[0.0000, 1.0000, 2.0000, 3.0000, 4.0000, 5.0000],
         [6.0000, 7.0000, 8.0000, 9.0000, 10.0000, 11.0000],
         [12.0000, 13.0000, 14.0000, 15.0000, 16.0000, 17.0000],
         [18.0000, 19.0000, 20.0000, 21.0000, 22.0000, 23.0000]], 
        dtype=float32, loc=gpu:0, shape=(4, 6)),
}

>>> input
tensor([0, 2], dtype=int32, loc=gpu:0, shape=(2,))

>>> output
tensor(
    [[0.0000, 1.0000, 2.0000, 3.0000, 4.0000, 5.0000],
     [12.0000, 13.0000, 14.0000, 15.0000, 16.0000, 17.0000]], 
    dtype=float32, loc=gpu:0, shape=(2, 6))
load_state_dict(state_dict: Dict[str, Tensor], strict: bool = True) Tuple[Set[str], Set[str]]

Loads parameters from the provided state_dict into the current module. This will recurse over any nested child modules.

Parameters:
  • state_dict (Dict[str, Tensor]) – A dictionary mapping names to parameters.

  • strict (bool) – If True, the keys in state_dict must exactly match those expected by this module; if they do not, an error will be raised.

Returns:

  • missing_keys: keys that are expected by this module but not provided in state_dict.

  • unexpected_keys: keys that are not expected by this module but provided in state_dict.

Return type:

A tuple of two sets of strings representing the missing and unexpected keys, respectively.

Example
# Using the `module` and `state_dict` from the `state_dict()` example:
print(f"Before: {module.param}")

state_dict["param"] = tp.zeros((2,), dtype=tp.float32)
module.load_state_dict(state_dict)

print(f"After: {module.param}")
Output
Before: tensor([1.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,))
After: tensor([0.0000, 0.0000], dtype=float32, loc=gpu:0, shape=(2,))
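The two returned sets behave like set differences between the parameter names this module expects and the keys provided in state_dict. A minimal pure-Python sketch of that relationship (a hypothetical helper for illustration, not the library's implementation):

```python
def split_keys(expected_keys, provided_keys):
    # Keys the module expects but the state dict does not supply:
    missing = set(expected_keys) - set(provided_keys)
    # Keys the state dict supplies but the module does not define:
    unexpected = set(provided_keys) - set(expected_keys)
    return missing, unexpected


# For example, a module expecting `param` and `linear1.weight` given a
# state dict with `param` and `extra`:
missing, unexpected = split_keys({"param", "linear1.weight"}, {"param", "extra"})
# missing == {"linear1.weight"}, unexpected == {"extra"}
```

With strict=True, a non-empty result in either set would trigger an error instead of being returned.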

See also

state_dict()

named_children() Iterator[Tuple[str, Module]]

Returns an iterator over immediate children of this module, yielding tuples containing the name of the child module and the child module itself.

Returns:

An iterator over tuples containing the name of the child module and the child module itself.

Return type:

Iterator[Tuple[str, Module]]

Example
class StackedLinear(tp.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = tp.Linear(2, 2)
        self.linear2 = tp.Linear(2, 2)


stacked_linear = StackedLinear()

for name, module in stacked_linear.named_children():
    print(f"{name}: {type(module).__name__}")
Output
linear1: Linear
linear2: Linear
named_parameters() Iterator[Tuple[str, Tensor]]

Returns an iterator over the parameters of this module, yielding tuples containing the name of a parameter and the parameter itself.

Returns:

An iterator over tuples containing the name of a parameter and the parameter itself.

Return type:

Iterator[Tuple[str, Tensor]]

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.alpha = tp.Tensor(1)
        self.beta = tp.Tensor(2)


linear = MyModule()

for name, parameter in linear.named_parameters():
    print(f"{name}: {parameter}")
Output
alpha: tensor(1, dtype=int32, loc=gpu:0, shape=())
beta: tensor(2, dtype=int32, loc=gpu:0, shape=())
state_dict() Dict[str, Tensor]

Returns a dictionary mapping names to parameters in the module. This will recurse over any nested child modules.

Returns:

A dictionary mapping names to parameters.

Return type:

Dict[str, Tensor]

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.param = tp.ones((2,), dtype=tp.float32)
        self.linear1 = tp.Linear(2, 2)
        self.linear2 = tp.Linear(2, 2)


module = MyModule()

state_dict = module.state_dict()
Local Variables
>>> state_dict
{
    param: tensor([1.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    linear1.weight: tensor(
        [[0.0000, 1.0000],
         [2.0000, 3.0000]], 
        dtype=float32, loc=gpu:0, shape=(2, 2)),
    linear1.bias: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
    linear2.weight: tensor(
        [[0.0000, 1.0000],
         [2.0000, 3.0000]], 
        dtype=float32, loc=gpu:0, shape=(2, 2)),
    linear2.bias: tensor([0.0000, 1.0000], dtype=float32, loc=gpu:0, shape=(2,)),
}
dtype: dtype

The data type used to perform the operation.

weight: Tensor

The embedding lookup table of shape \([\text{num_embeddings}, \text{embedding_dim}]\).

__call__(x: Tensor) Tensor[source]
Parameters:

x (Tensor) – A tensor of shape \([N]\) containing the indices of the desired embedding vectors.

Returns:

A tensor of shape \([N, \text{embedding_dim}]\) containing the embedding vectors.

Return type:

Tensor
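Functionally, the lookup gathers rows of weight by index. A minimal NumPy sketch of the same semantics (NumPy stands in here for the library's actual GPU implementation; the values mirror the Embedding example above):

```python
import numpy as np

# A 4x6 lookup table, matching num_embeddings=4, embedding_dim=6 above.
weight = np.arange(24, dtype=np.float32).reshape(4, 6)

# Indices of shape [N] select rows; the result has shape [N, embedding_dim].
indices = np.array([0, 2])
output = weight[indices]
# output[0] is row 0 of the table; output[1] is row 2.
```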