{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#| default_exp models.lstm"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#| hide\n",
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# LSTM"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The Long Short-Term Memory Recurrent Neural Network (`LSTM`), uses a multilayer `LSTM` encoder and an `MLP` decoder. It builds upon the LSTM-cell that improves the exploding and vanishing gradients of classic `RNN`'s. This network has been extensively used in sequential prediction tasks like language modeling, phonetic labeling, and forecasting. The predictions are obtained by transforming the hidden states into contexts $\\mathbf{c}_{[t+1:t+H]}$, that are decoded and adapted into $\\mathbf{\\hat{y}}_{[t+1:t+H],[q]}$ through MLPs.\n",
"\n",
"\\begin{align}\n",
" \\mathbf{h}_{t} &= \\textrm{LSTM}([\\mathbf{y}_{t},\\mathbf{x}^{(h)}_{t},\\mathbf{x}^{(s)}], \\mathbf{h}_{t-1})\\\\\n",
"\\mathbf{c}_{[t+1:t+H]}&=\\textrm{Linear}([\\mathbf{h}_{t}, \\mathbf{x}^{(f)}_{[:t+H]}]) \\\\ \n",
"\\hat{y}_{\\tau,[q]}&=\\textrm{MLP}([\\mathbf{c}_{\\tau},\\mathbf{x}^{(f)}_{\\tau}])\n",
"\\end{align}\n",
"\n",
"where $\\mathbf{h}_{t}$, is the hidden state for time $t$, $\\mathbf{y}_{t}$ is the input at time $t$ and $\\mathbf{h}_{t-1}$ is the hidden state of the previous layer at $t-1$, $\\mathbf{x}^{(s)}$ are static exogenous inputs, $\\mathbf{x}^{(h)}_{t}$ historic exogenous, $\\mathbf{x}^{(f)}_{[:t+H]}$ are future exogenous available at the time of the prediction.\n",
"\n",
"**References**
-[Jeffrey L. Elman (1990). \"Finding Structure in Time\".](https://onlinelibrary.wiley.com/doi/abs/10.1207/s15516709cog1402_1)
-[Haşim Sak, Andrew Senior, Françoise Beaufays (2014). \"Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition.\"](https://arxiv.org/abs/1402.1128)
"
]
},
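{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the shape bookkeeping of the equations above concrete, here is a minimal sketch (not part of the exported model; the toy sizes `B`, `L`, `H`, `hidden_size`, and `context_size` are assumptions) that traces a univariate batch through an LSTM encoder, a linear context adapter, and a per-step MLP decoder, mirroring the `LSTM` → `Linear` → `MLP` pipeline implemented below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal shape-flow sketch of the LSTM -> Linear -> MLP pipeline (toy sizes, illustration only)\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"B, L, H = 4, 24, 12                       # batch size, input length, forecast horizon (assumed)\n",
"hidden_size, context_size = 16, 8         # assumed toy sizes\n",
"\n",
"encoder = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)\n",
"context_adapter = nn.Linear(hidden_size, context_size * H)\n",
"decoder = nn.Sequential(nn.Linear(context_size, 32), nn.ReLU(), nn.Linear(32, 1))\n",
"\n",
"y = torch.randn(B, L, 1)                  # univariate inputs y_t\n",
"h, _ = encoder(y)                         # hidden states h_t:     [B, L, hidden_size]\n",
"c = context_adapter(h)                    # contexts c_{t+1:t+H}:  [B, L, context_size*H]\n",
"c = c.reshape(B, L, H, context_size)      # one context per step:  [B, L, H, context_size]\n",
"y_hat = decoder(c)                        # decoded forecasts:     [B, L, H, 1]\n",
"print(y_hat.shape)                        # torch.Size([4, 24, 12, 1])"
]
},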
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#| hide\n",
"from nbdev.showdoc import show_doc"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#| export\n",
"from typing import Optional\n",
"\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"from neuralforecast.losses.pytorch import MAE\n",
"from neuralforecast.common._base_recurrent import BaseRecurrent\n",
"from neuralforecast.common._modules import MLP"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#| export\n",
"class LSTM(BaseRecurrent):\n",
" \"\"\" LSTM\n",
"\n",
" LSTM encoder, with MLP decoder.\n",
"    The network is trained using ADAM stochastic gradient descent and\n",
"    accepts static, historic and future exogenous data.\n",
"\n",
"    **Parameters:**<br>\n",
"    `h`: int, forecast horizon.<br>\n",
"    `input_size`: int, maximum sequence length for truncated train backpropagation. Default -1 uses all history.<br>\n",
"    `inference_input_size`: int, maximum sequence length for truncated inference. Default -1 uses all history.<br>\n",
"    `encoder_n_layers`: int=2, number of layers for the LSTM.<br>\n",
"    `encoder_hidden_size`: int=200, units for the LSTM's hidden state size.<br>\n",
"    `encoder_bias`: bool=True, whether or not to use biases b_ih, b_hh within LSTM units.<br>\n",
"    `encoder_dropout`: float=0., dropout regularization applied to LSTM outputs.<br>\n",
"    `context_size`: int=10, size of context vector for each timestamp in the forecasting window.<br>\n",
"    `decoder_hidden_size`: int=200, size of hidden layer for the MLP decoder.<br>\n",
"    `decoder_layers`: int=2, number of layers for the MLP decoder.<br>\n",
"    `futr_exog_list`: str list, future exogenous columns.<br>\n",
"    `hist_exog_list`: str list, historic exogenous columns.<br>\n",
"    `stat_exog_list`: str list, static exogenous columns.<br>\n",
"    `loss`: PyTorch module, instantiated train loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
"    `valid_loss`: PyTorch module=`loss`, instantiated valid loss class from [losses collection](https://nixtla.github.io/neuralforecast/losses.pytorch.html).<br>\n",
"    `max_steps`: int=1000, maximum number of training steps.<br>\n",
"    `learning_rate`: float=1e-3, learning rate between (0, 1).<br>\n",
"    `num_lr_decays`: int=-1, number of learning rate decays, evenly distributed across max_steps.<br>\n",
"    `early_stop_patience_steps`: int=-1, number of validation iterations before early stopping.<br>\n",
"    `val_check_steps`: int=100, number of training steps between every validation loss check.<br>\n",
"    `batch_size`: int=32, number of different series in each batch.<br>\n",
"    `valid_batch_size`: int=None, number of different series in each validation and test batch.<br>\n",
"    `scaler_type`: str='robust', type of scaler for temporal inputs normalization, see [temporal scalers](https://nixtla.github.io/neuralforecast/common.scalers.html).<br>\n",
"    `random_seed`: int=1, random seed for pytorch initializer and numpy generators.<br>\n",
"    `num_workers_loader`: int=0, workers to be used by `TimeSeriesDataLoader`.<br>\n",
"    `drop_last_loader`: bool=False, if True `TimeSeriesDataLoader` drops last non-full batch.<br>\n",
"    `alias`: str, optional, custom name of the model.<br>\n",
"    `optimizer`: Subclass of `torch.optim.Optimizer`, optional, user specified optimizer instead of the default choice (Adam).<br>\n",
"    `optimizer_kwargs`: dict, optional, dict of parameters used by the user specified `optimizer`.<br>\n",
"    `**trainer_kwargs`: int, keyword trainer arguments inherited from [PyTorch Lightning's trainer](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.trainer.trainer.Trainer.html?highlight=trainer).<br>\n",
" \"\"\"\n",
" # Class attributes\n",
" SAMPLING_TYPE = 'recurrent'\n",
" \n",
" def __init__(self,\n",
" h: int,\n",
" input_size: int = -1,\n",
" inference_input_size: int = -1,\n",
" encoder_n_layers: int = 2,\n",
" encoder_hidden_size: int = 200,\n",
" encoder_bias: bool = True,\n",
" encoder_dropout: float = 0.,\n",
" context_size: int = 10,\n",
" decoder_hidden_size: int = 200,\n",
" decoder_layers: int = 2,\n",
" futr_exog_list = None,\n",
" hist_exog_list = None,\n",
" stat_exog_list = None,\n",
" loss = MAE(),\n",
" valid_loss = None,\n",
" max_steps: int = 1000,\n",
" learning_rate: float = 1e-3,\n",
" num_lr_decays: int = -1,\n",
" early_stop_patience_steps: int =-1,\n",
" val_check_steps: int = 100,\n",
" batch_size = 32,\n",
" valid_batch_size: Optional[int] = None,\n",
" scaler_type: str = 'robust',\n",
" random_seed = 1,\n",
" num_workers_loader = 0,\n",
" drop_last_loader = False,\n",
" optimizer=None,\n",
" optimizer_kwargs=None,\n",
" **trainer_kwargs):\n",
" super(LSTM, self).__init__(\n",
" h=h,\n",
" input_size=input_size,\n",
" inference_input_size=inference_input_size,\n",
" loss=loss,\n",
" valid_loss=valid_loss,\n",
" max_steps=max_steps,\n",
" learning_rate=learning_rate,\n",
" num_lr_decays=num_lr_decays,\n",
" early_stop_patience_steps=early_stop_patience_steps,\n",
" val_check_steps=val_check_steps,\n",
" batch_size=batch_size,\n",
" valid_batch_size=valid_batch_size,\n",
" scaler_type=scaler_type,\n",
" futr_exog_list=futr_exog_list,\n",
" hist_exog_list=hist_exog_list,\n",
" stat_exog_list=stat_exog_list,\n",
" num_workers_loader=num_workers_loader,\n",
" drop_last_loader=drop_last_loader,\n",
" random_seed=random_seed,\n",
" optimizer=optimizer,\n",
" optimizer_kwargs=optimizer_kwargs,\n",
" **trainer_kwargs\n",
" )\n",
"\n",
" # LSTM\n",
" self.encoder_n_layers = encoder_n_layers\n",
" self.encoder_hidden_size = encoder_hidden_size\n",
" self.encoder_bias = encoder_bias\n",
" self.encoder_dropout = encoder_dropout\n",
" \n",
" # Context adapter\n",
" self.context_size = context_size\n",
"\n",
" # MLP decoder\n",
" self.decoder_hidden_size = decoder_hidden_size\n",
" self.decoder_layers = decoder_layers\n",
"\n",
" self.futr_exog_size = len(self.futr_exog_list)\n",
" self.hist_exog_size = len(self.hist_exog_list)\n",
" self.stat_exog_size = len(self.stat_exog_list)\n",
" \n",
" # LSTM input size (1 for target variable y)\n",
" input_encoder = 1 + self.hist_exog_size + self.stat_exog_size\n",
"\n",
" # Instantiate model\n",
" self.hist_encoder = nn.LSTM(input_size=input_encoder,\n",
" hidden_size=self.encoder_hidden_size,\n",
" num_layers=self.encoder_n_layers,\n",
" bias=self.encoder_bias,\n",
" dropout=self.encoder_dropout,\n",
" batch_first=True)\n",
"\n",
" # Context adapter\n",
" self.context_adapter = nn.Linear(in_features=self.encoder_hidden_size + self.futr_exog_size * h,\n",
" out_features=self.context_size * h)\n",
"\n",
" # Decoder MLP\n",
" self.mlp_decoder = MLP(in_features=self.context_size + self.futr_exog_size,\n",
" out_features=self.loss.outputsize_multiplier,\n",
" hidden_size=self.decoder_hidden_size,\n",
" num_layers=self.decoder_layers,\n",
" activation='ReLU',\n",
" dropout=0.0)\n",
"\n",
" def forward(self, windows_batch):\n",
" \n",
" # Parse windows_batch\n",
" encoder_input = windows_batch['insample_y'] # [B, seq_len, 1]\n",
" futr_exog = windows_batch['futr_exog']\n",
" hist_exog = windows_batch['hist_exog']\n",
" stat_exog = windows_batch['stat_exog']\n",
"\n",
" # Concatenate y, historic and static inputs\n",
" # [B, C, seq_len, 1] -> [B, seq_len, C]\n",
"        # Concatenate [ Y_t, | X_{t-L},..., X_{t} | S ]\n",
" batch_size, seq_len = encoder_input.shape[:2]\n",
" if self.hist_exog_size > 0:\n",
" hist_exog = hist_exog.permute(0,2,1,3).squeeze(-1) # [B, X, seq_len, 1] -> [B, seq_len, X]\n",
" encoder_input = torch.cat((encoder_input, hist_exog), dim=2)\n",
"\n",
" if self.stat_exog_size > 0:\n",
" stat_exog = stat_exog.unsqueeze(1).repeat(1, seq_len, 1) # [B, S] -> [B, seq_len, S]\n",
" encoder_input = torch.cat((encoder_input, stat_exog), dim=2)\n",
"\n",
" # RNN forward\n",
" hidden_state, _ = self.hist_encoder(encoder_input) # [B, seq_len, rnn_hidden_state]\n",
"\n",
" if self.futr_exog_size > 0:\n",
" futr_exog = futr_exog.permute(0,2,3,1)[:,:,1:,:] # [B, F, seq_len, 1+H] -> [B, seq_len, H, F]\n",
" hidden_state = torch.cat(( hidden_state, futr_exog.reshape(batch_size, seq_len, -1)), dim=2)\n",
"\n",
" # Context adapter\n",
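"        # Maps each hidden state (with flattened future exogenous, if any) to h context vectors of size context_size\n",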
" context = self.context_adapter(hidden_state)\n",
" context = context.reshape(batch_size, seq_len, self.h, self.context_size)\n",
"\n",
" # Residual connection with futr_exog\n",
" if self.futr_exog_size > 0:\n",
" context = torch.cat((context, futr_exog), dim=-1)\n",
"\n",
" # Final forecast\n",
" output = self.mlp_decoder(context)\n",
" output = self.loss.domain_map(output)\n",
" \n",
" return output"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"show_doc(LSTM)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"show_doc(LSTM.fit, name='LSTM.fit')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"show_doc(LSTM.predict, name='LSTM.predict')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Usage Example"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#| eval: false\n",
"import numpy as np\n",
"import pandas as pd\n",
"import pytorch_lightning as pl\n",
"import matplotlib.pyplot as plt\n",
"\n",
"from neuralforecast import NeuralForecast\n",
"from neuralforecast.models import LSTM\n",
"from neuralforecast.losses.pytorch import MQLoss, DistributionLoss\n",
"from neuralforecast.utils import AirPassengersPanel, AirPassengersStatic\n",
"from neuralforecast.tsdataset import TimeSeriesDataset, TimeSeriesLoader\n",
"\n",
"Y_train_df = AirPassengersPanel[AirPassengersPanel.ds=AirPassengersPanel['ds'].values[-12]].reset_index(drop=True) # 12 test\n",
"\n",
"nf = NeuralForecast(\n",
" models=[LSTM(h=12, input_size=-1,\n",
" loss=DistributionLoss(distribution='Normal', level=[80, 90]),\n",
" scaler_type='robust',\n",
" encoder_n_layers=2,\n",
" encoder_hidden_size=128,\n",
" context_size=10,\n",
" decoder_hidden_size=128,\n",
" decoder_layers=2,\n",
" max_steps=200,\n",
" futr_exog_list=['y_[lag12]'],\n",
" #hist_exog_list=['y_[lag12]'],\n",
" stat_exog_list=['airline1'],\n",
" )\n",
" ],\n",
" freq='M'\n",
")\n",
"nf.fit(df=Y_train_df, static_df=AirPassengersStatic)\n",
"Y_hat_df = nf.predict(futr_df=Y_test_df)\n",
"\n",
"Y_hat_df = Y_hat_df.reset_index(drop=False).drop(columns=['unique_id','ds'])\n",
"plot_df = pd.concat([Y_test_df, Y_hat_df], axis=1)\n",
"plot_df = pd.concat([Y_train_df, plot_df])\n",
"\n",
"plot_df = plot_df[plot_df.unique_id=='Airline1'].drop('unique_id', axis=1)\n",
"plt.plot(plot_df['ds'], plot_df['y'], c='black', label='True')\n",
"plt.plot(plot_df['ds'], plot_df['LSTM'], c='purple', label='mean')\n",
"plt.plot(plot_df['ds'], plot_df['LSTM-median'], c='blue', label='median')\n",
"plt.fill_between(x=plot_df['ds'][-12:], \n",
" y1=plot_df['LSTM-lo-90'][-12:].values, \n",
" y2=plot_df['LSTM-hi-90'][-12:].values,\n",
" alpha=0.4, label='level 90')\n",
"plt.legend()\n",
"plt.grid()\n",
"plt.plot()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "python3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}