Recurrent Neural Network Cell (RNNCell)#

API#

class warp_nn.modules.layers.RNNCell(input_size: int, hidden_size: int, *, bias: bool = True)[source]#

Bases: Module

Apply an Elman Recurrent Neural Network (RNN) cell.

\[\text{RNNCell}(x, h) = \tanh(W_{ih} \, x + b_{ih} + W_{hh} \, h + b_{hh})\]


Learnable parameters:

Symbol        Name        Shape                        Description
\(W_{ih}\)    weight_ih   (hidden_size, input_size)    Input-to-hidden weights
\(W_{hh}\)    weight_hh   (hidden_size, hidden_size)   Hidden-to-hidden weights
\(b_{ih}\)    bias_ih     (hidden_size, 1)             Input-to-hidden bias; present only if bias is True
\(b_{hh}\)    bias_hh     (hidden_size, 1)             Hidden-to-hidden bias; present only if bias is True

The parameters are initialized from the uniform distribution \(\mathcal{U}(-k, k)\) where \(k = \frac{1}{\sqrt{\text{hidden\_size}}}\).
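The initialization scheme above can be sketched in NumPy. This is an illustrative re-implementation, not warp_nn's actual code; the function and dictionary key names mirror the parameter table but are otherwise hypothetical.

```python
import numpy as np

def init_rnncell_params(input_size, hidden_size, bias=True, seed=0):
    """Sketch of RNNCell initialization: every parameter drawn from
    U(-k, k) with k = 1 / sqrt(hidden_size)."""
    rng = np.random.default_rng(seed)
    k = 1.0 / np.sqrt(hidden_size)
    params = {
        "weight_ih": rng.uniform(-k, k, (hidden_size, input_size)),
        "weight_hh": rng.uniform(-k, k, (hidden_size, hidden_size)),
    }
    if bias:  # bias terms exist only when bias=True
        params["bias_ih"] = rng.uniform(-k, k, (hidden_size, 1))
        params["bias_hh"] = rng.uniform(-k, k, (hidden_size, 1))
    return params
```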


Parameters:
  • input_size – The number of input features.

  • hidden_size – The number of hidden features.

  • bias – Whether to include a bias term.

__call__(input: array, hidden: array) → array[source]#

Forward pass of the module.

Parameters:
  • input – The input array, with shape (batch_size, input_size).

  • hidden – The initial hidden state array, with shape (batch_size, hidden_size).

Returns:

The next hidden state array, with shape (batch_size, hidden_size).
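As a reference for the shapes and the update rule above, the forward pass can be sketched in NumPy. This is a minimal illustration under the documented shapes, not warp_nn's implementation; biases are stored as (hidden_size, 1) per the parameter table, so they are transposed to broadcast over the batch.

```python
import numpy as np

def rnncell_forward(x, h, weight_ih, weight_hh, bias_ih, bias_hh):
    """Next hidden state: tanh(W_ih x + b_ih + W_hh h + b_hh),
    applied row-wise over the batch."""
    # x: (batch_size, input_size), h: (batch_size, hidden_size)
    return np.tanh(x @ weight_ih.T + bias_ih.T + h @ weight_hh.T + bias_hh.T)

# Shape check with small random parameters drawn from U(-k, k)
batch, input_size, hidden_size = 4, 3, 5
rng = np.random.default_rng(0)
k = 1.0 / np.sqrt(hidden_size)
W_ih = rng.uniform(-k, k, (hidden_size, input_size))
W_hh = rng.uniform(-k, k, (hidden_size, hidden_size))
b_ih = rng.uniform(-k, k, (hidden_size, 1))
b_hh = rng.uniform(-k, k, (hidden_size, 1))
x = rng.standard_normal((batch, input_size))
h = np.zeros((batch, hidden_size))
h_next = rnncell_forward(x, h, W_ih, W_hh, b_ih, b_hh)
# h_next has shape (batch_size, hidden_size), values in (-1, 1) from tanh
```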