LSTM¶
Implementation of a long short-term memory (LSTM) recurrent neural network, based on the work of Hochreiter & Schmidhuber (1997).
For a more in-depth explanation of LSTMs, see for instance the excellent blog post by Christopher Olah.
LSTM Module Documentation¶
Implement a simple vanilla LSTM.
- class nlp_uncertainty_zoo.models.lstm.CellWiseLSTM(input_size: int, hidden_size: int, dropout: float, cells: List[LSTMCell], device: Union[device, str])¶
Bases: Module
Model of an LSTM with a custom cell class. A construction sketch is given below.
- forward(input_: FloatTensor, hidden: Tuple[FloatTensor, FloatTensor]) → Tuple[FloatTensor, Tuple[FloatTensor, FloatTensor]]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
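Example (a minimal construction sketch; the use of torch.nn.LSTMCell as the cell class and the chosen sizes are assumptions, not prescribed by the API):

    from torch.nn import LSTMCell
    from nlp_uncertainty_zoo.models.lstm import CellWiseLSTM

    hidden_size = 650
    # One cell per layer; plain torch.nn.LSTMCell instances are assumed here.
    cells = [LSTMCell(hidden_size, hidden_size), LSTMCell(hidden_size, hidden_size)]
    lstm = CellWiseLSTM(
        input_size=hidden_size,
        hidden_size=hidden_size,
        dropout=0.2,
        cells=cells,
        device="cpu",
    )

The forward call then takes a FloatTensor input together with a (hidden state, cell state) tuple, as shown in the signature above.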
- class nlp_uncertainty_zoo.models.lstm.LSTM(vocab_size: int, output_size: int, input_size: int = 650, hidden_size: int = 650, num_layers: int = 2, dropout: float = 0.2, init_weight: Optional[float] = 0.6, is_sequence_classifier: bool = True, lr: float = 0.5, weight_decay: float = 0.001, optimizer_class: Type[Optimizer] = torch.optim.adam.Adam, scheduler_class: Optional[Type[_LRScheduler]] = None, scheduler_kwargs: Optional[Dict[str, Any]] = None, model_dir: Optional[str] = None, device: Union[device, str] = 'cpu', **model_params)¶
Bases: Model
- predict(X: Tensor, *args, **kwargs) → Tensor¶
Make a prediction for some input; a usage example follows the parameter list below.
- Parameters:
- X: torch.Tensor
Input data points.
- Returns:
- torch.Tensor
Predictions.
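Example (a minimal usage sketch based on the constructor signature above; the vocabulary size, number of classes, and the shape of X, here a batch of index-encoded sequences, are assumptions):

    import torch
    from nlp_uncertainty_zoo.models.lstm import LSTM

    model = LSTM(vocab_size=10_000, output_size=2)  # all other arguments keep the defaults above
    X = torch.randint(0, 10_000, (16, 20))  # 16 sequences of length 20 (assumed shape)
    preds = model.predict(X)  # predictions for the batch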
- class nlp_uncertainty_zoo.models.lstm.LSTMModule(vocab_size: int, output_size: int, input_size: int, hidden_size: int, num_layers: int, dropout: float, is_sequence_classifier: bool, device: Union[device, str], **build_params)¶
Bases: Module
Implementation of an LSTM for classification.
- forward(input_: LongTensor, hidden_states: Optional[Dict[int, FloatTensor]] = None, **kwargs) → FloatTensor¶
The forward pass of the model; a usage sketch follows the parameter list below.
- Parameters:
- input_: torch.LongTensor
Current batch in the form of indexed token sequences.
- hidden_states: Optional[HiddenDict]
Dictionary of hidden and cell states by layer to initialize the model with at the first time step. If None, they will be initialized with zero vectors or the ones stored under last_hidden_states if available.
- Returns:
- torch.FloatTensor
Tensor of unnormalized output distributions for the current batch.
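Example (a sketch assuming index-encoded input of shape (batch_size, seq_len) and default zero-initialized hidden states):

    import torch
    from nlp_uncertainty_zoo.models.lstm import LSTMModule

    module = LSTMModule(
        vocab_size=10_000, output_size=2, input_size=650, hidden_size=650,
        num_layers=2, dropout=0.2, is_sequence_classifier=True, device="cpu",
    )
    input_ = torch.randint(0, 10_000, (16, 20))  # (batch_size, seq_len), assumed layout
    out = module(input_)  # call the module itself so registered hooks run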
Obtain hidden representations for the current input.
- Parameters:
- input_: torch.LongTensor
Input ids for a sentence.
- Returns:
- torch.FloatTensor
Representation for the current sequence.
- get_logits(input_: LongTensor, *args, **kwargs) → FloatTensor¶
Get the logits for an input. Results in a tensor of size batch_size x seq_len x output_size or batch_size x num_predictions x seq_len x output_size, depending on the model type. Used to create inputs for the uncertainty metrics defined in nlp_uncertainty_zoo.metrics; a shape-checking sketch follows the parameter list below.
- Parameters:
- input_: torch.LongTensor
(Batch of) Indexed input sequences.
- Returns:
- torch.FloatTensor
Logits for current input.
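Continuing the LSTMModule example above, the resulting shape can be checked against the description (a sketch; the exact shape for multi-prediction models depends on the model type):

    logits = module.get_logits(input_)
    # Deterministic LSTM: (batch_size, seq_len, output_size).
    # Multi-prediction models: (batch_size, num_predictions, seq_len, output_size).
    print(logits.shape)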
Define how the representation for an entire sequence is extracted from a series of hidden states. This is relevant for sequence classification. For example, this could be the last hidden state for a unidirectional LSTM, or the first hidden state for a transformer followed by a pooler layer; see the sketch below.
- Parameters:
- hidden: torch.FloatTensor
Hidden states of a model for a sequence.
- Returns:
- torch.FloatTensor
Representation for the current sequence.
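In plain PyTorch terms, the unidirectional LSTM case could reduce to selecting the last time step (an illustration of the idea, not the library's exact implementation):

    import torch

    hidden = torch.randn(16, 20, 650)  # (batch_size, seq_len, hidden_size), dummy values
    sequence_repr = hidden[:, -1, :]  # last hidden state as the sequence representation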
Initialize all the hidden and cell states with zero vectors, for instance at the beginning of training or after switching between training and test mode (sketched below).
- Parameters:
- batch_size: int
Size of current batch.
- device: Device
Device of the model.
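What zero initialization amounts to, sketched with an assumed per-layer dictionary layout matching the hidden_states argument of forward above:

    import torch

    num_layers, batch_size, hidden_size = 2, 16, 650
    device = "cpu"
    hidden_states = {
        layer: (
            torch.zeros(batch_size, hidden_size, device=device),  # hidden state
            torch.zeros(batch_size, hidden_size, device=device),  # cell state
        )
        for layer in range(num_layers)
    }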
- training: bool¶
- class nlp_uncertainty_zoo.models.lstm.LayerWiseLSTM(layers: List[Module], dropout: float, device: Union[device, str])¶
Bases: Module
Model of an LSTM with a custom layer class. A construction sketch is given below.
- forward(input_: FloatTensor, hidden: Tuple[FloatTensor, FloatTensor]) → Tuple[FloatTensor, Tuple[FloatTensor, FloatTensor]]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- training: bool¶
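Example (a minimal construction sketch; using single-layer torch.nn.LSTM modules with batch_first=True as the layer class is an assumption):

    from torch.nn import LSTM as TorchLSTM
    from nlp_uncertainty_zoo.models.lstm import LayerWiseLSTM

    layers = [
        TorchLSTM(650, 650, batch_first=True),  # layer 1 (assumed layer type)
        TorchLSTM(650, 650, batch_first=True),  # layer 2
    ]
    lstm = LayerWiseLSTM(layers=layers, dropout=0.2, device="cpu")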