Embedding

class nvtripy.Embedding(num_embeddings: int, embedding_dim: int, dtype: dtype = float32)[source]

Bases: Module

A lookup table for embedding vectors of a fixed size. Embedding vectors can be retrieved by their indices.

Parameters:
  • num_embeddings (int) – Number of embedding vectors in the lookup table.

  • embedding_dim (int) – Size of each embedding vector in the lookup table.

  • dtype (dtype) – The data type to use for the weight parameter.

Example
embedding = tp.Embedding(num_embeddings=4, embedding_dim=6)

embedding.weight = tp.iota(embedding.weight.shape)

input = tp.Tensor([0, 2], dtype=tp.int32)
output = embedding(input)
Local Variables
>>> embedding
Embedding(
    weight: Parameter = (shape=(4, 6), dtype=float32),
)
>>> embedding.state_dict()
{
    weight: tensor(
        [[0, 0, 0, 0, 0, 0],
         [1, 1, 1, 1, 1, 1],
         [2, 2, 2, 2, 2, 2],
         [3, 3, 3, 3, 3, 3]], 
        dtype=float32, loc=gpu:0, shape=(4, 6)),
}

>>> input
tensor([0, 2], dtype=int32, loc=cpu:0, shape=(2,))

>>> output
tensor(
    [[0, 0, 0, 0, 0, 0],
     [2, 2, 2, 2, 2, 2]], 
    dtype=float32, loc=gpu:0, shape=(2, 6))
__call__(*args: Any, **kwargs: Any) Any

Calls the module with the specified arguments.

Parameters:
  • *args (Any) – Positional arguments to the module.

  • **kwargs (Any) – Keyword arguments to the module.

Returns:

The outputs computed by the module.

Return type:

Any

Example
class Module(tp.Module):
    def forward(self, x):
        return tp.relu(x)


module = Module()

input = tp.arange(-3, 3)
out = module(input)  # Note that we do not call `forward` directly.
Local Variables
>>> module
Module(
)
>>> module.state_dict()
{}

>>> input
tensor([-3, -2, -1, 0, 1, 2], dtype=float32, loc=gpu:0, shape=(6,))

>>> out
tensor([0, 0, 0, 0, 1, 2], dtype=float32, loc=gpu:0, shape=(6,))
load_state_dict(state_dict: Dict[str, Tensor], strict: bool = True) Tuple[Set[str], Set[str]]

Loads parameters from the provided state_dict into the current module. This will recurse over any nested child modules.

Parameters:
  • state_dict (Dict[str, Tensor]) – A dictionary mapping names to parameters.

  • strict (bool) – If True, the keys in state_dict must exactly match the keys expected by this module; any mismatch raises an error.

Returns:

  • missing_keys: keys that are expected by this module but not provided in state_dict.

  • unexpected_keys: keys that are provided in state_dict but not expected by this module.

Return type:

A tuple of two sets of strings: (missing_keys, unexpected_keys).

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.param = tp.ones((2,), dtype=tp.float32)


module = MyModule()

print(f"Before: {module.param}")

module.load_state_dict({"param": tp.zeros((2,), dtype=tp.float32)})

print(f"After: {module.param}")
Output
Before: tensor([1, 1], dtype=float32, loc=gpu:0, shape=(2,))
After: tensor([0, 0], dtype=float32, loc=gpu:0, shape=(2,))
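
The strict flag and the return value are not exercised above. The following sketch (assumed behavior, reusing MyModule from the example) shows how mismatched keys would be reported when strict=False:

module = MyModule()
# With strict=False, the exact-match check is assumed to be skipped and
# mismatches are returned rather than raised.
missing, unexpected = module.load_state_dict(
    {"extra": tp.zeros((2,), dtype=tp.float32)}, strict=False
)
print(missing)     # keys expected by the module but not provided, e.g. {'param'}
print(unexpected)  # keys provided but not expected, e.g. {'extra'}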

See also

state_dict()

named_children() Iterator[Tuple[str, Module]]

Returns an iterator over immediate children of this module, yielding tuples containing the name of the child module and the child module itself.

Returns:

An iterator over tuples containing the name of the child module and the child module itself.

Return type:

Iterator[Tuple[str, Module]]

Example
class StackedLinear(tp.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = tp.Linear(2, 2)
        self.linear2 = tp.Linear(2, 2)


stacked_linear = StackedLinear()

for name, module in stacked_linear.named_children():
    print(f"{name}: {type(module).__name__}")
Output
linear1: Linear
linear2: Linear
named_parameters() Iterator[Tuple[str, Tensor]]

Returns an iterator over the parameters of this module, yielding tuples containing the name of each parameter and the parameter itself.

Returns:

An iterator over tuples containing the name of a parameter and the parameter itself.

Return type:

Iterator[Tuple[str, Tensor]]

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.alpha = tp.Tensor(1)
        self.beta = tp.Tensor(2)


module = MyModule()

for name, parameter in module.named_parameters():
    print(f"{name}: {parameter}")
Output
alpha: tensor(1, dtype=int32, loc=cpu:0, shape=())
beta: tensor(2, dtype=int32, loc=cpu:0, shape=())
state_dict() Dict[str, Tensor]

Returns a dictionary mapping names to parameters in the module. This will recurse over any nested child modules.

Returns:

A dictionary mapping names to parameters.

Return type:

Dict[str, Tensor]

Example
class MyModule(tp.Module):
    def __init__(self):
        super().__init__()
        self.param = tp.ones((2,), dtype=tp.float32)
        self.linear1 = tp.Linear(2, 2)
        self.linear2 = tp.Linear(2, 2)


module = MyModule()

state_dict = module.state_dict()
Local Variables
>>> state_dict
{
    param: tensor([1, 1], dtype=float32, loc=gpu:0, shape=(2,)),
    linear1.weight: <nvtripy.frontend.module.parameter.DefaultParameter object at 0x79774c352e50>,
    linear1.bias: <nvtripy.frontend.module.parameter.DefaultParameter object at 0x79774c355cd0>,
    linear2.weight: <nvtripy.frontend.module.parameter.DefaultParameter object at 0x79774c35d700>,
    linear2.bias: <nvtripy.frontend.module.parameter.DefaultParameter object at 0x79774c35d820>,
}
dtype: dtype

The data type used to perform the operation.

weight: Tensor

The embedding lookup table of shape \([\text{num_embeddings}, \text{embedding_dim}]\).
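
As an illustrative sketch (not part of the documented example), the dtype argument controls the precision of weight, and the lookup table can be overwritten with any tensor of matching shape and data type:

# Sketch only: build a half-precision table, then overwrite it with custom values.
# Assumes tp.float16 is among the supported data types; substitute any supported dtype.
embedding = tp.Embedding(num_embeddings=4, embedding_dim=6, dtype=tp.float16)
embedding.weight = tp.ones((4, 6), dtype=tp.float16)  # must match (num_embeddings, embedding_dim)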

forward(x: Tensor) Tensor[source]

Looks up the embedding vectors for the specified indices.

Parameters:

x (Tensor) – A tensor of shape \([N]\) containing the indices of the desired embedding vectors.

Returns:

A tensor of shape \([N, \text{embedding_dim}]\) containing the embedding vectors.

Return type:

Tensor
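
As a hedged usage sketch (mirroring the class example above), forward() is reached by calling the module itself rather than invoking it directly:

embedding = tp.Embedding(num_embeddings=4, embedding_dim=6)
indices = tp.Tensor([1, 3], dtype=tp.int32)  # shape (N,) = (2,); values must be < num_embeddings
vectors = embedding(indices)  # dispatches to forward(); result has shape (2, 6)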