{ "cells": [ { "cell_type": "code", "execution_count": null, "id": "30ebf8f9", "metadata": {}, "outputs": [], "source": [ "#| hide\n", "%load_ext autoreload\n", "%autoreload 2" ] }, { "attachments": {}, "cell_type": "markdown", "id": "43832dcf-0452-4503-aace-8f38e5e23723", "metadata": {}, "source": [ "# 🧠 NeuralForecast\n", "\n", "> **NeuralForecast** offers a large collection of neural forecasting models focused on usability and robustness. The models range from classic networks such as `MLP` and `RNN` to novel proven contributions such as `NBEATS`, `NHITS`, `TFT`, and other architectures." ] }, { "attachments": {}, "cell_type": "markdown", "id": "9bdd2aa5", "metadata": {}, "source": [ "## 🎊 Features\n", "\n", "* **Exogenous Variables**: Support for static, historic, and future exogenous variables.\n", "* **Forecast Interpretability**: Plot the trend, seasonality, and exogenous components of `NBEATS`, `NHITS`, `TFT`, and `ESRNN` predictions.\n", "* **Probabilistic Forecasting**: Simple model adapters for quantile losses and parametric distributions.\n", "* **Train and Evaluation Losses**: Scale-dependent, percentage, and scale-independent errors, as well as parametric likelihoods.\n", "* **Automatic Model Selection**: Parallelized automatic hyperparameter tuning that efficiently searches for the best validation configuration.\n", "* **Simple Interface**: Unified SKLearn-style interface, compatible with `StatsForecast` and `MLForecast`.\n", "* **Model Collection**: Out-of-the-box implementations of `MLP`, `LSTM`, `RNN`, `TCN`, `DilatedRNN`, `NBEATS`, `NHITS`, `ESRNN`, `Informer`, `TFT`, `PatchTST`, `VanillaTransformer`, `StemGNN`, and `HINT`. See the entire [collection here](https://nixtla.github.io/neuralforecast/models.html)."
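] }, { "cell_type": "markdown", "id": "pinball-loss-sketch", "metadata": {}, "source": [ "To make the probabilistic forecasting support above concrete: quantile-based training minimizes the pinball (quantile) loss. The sketch below is a hedged, library-independent illustration in plain NumPy, not `NeuralForecast`'s own implementation of its quantile losses:\n", "\n", "```python\n", "import numpy as np\n", "\n", "def pinball_loss(y, y_hat, q):\n", "    # Under-predictions are weighted by q and over-predictions by (1 - q),\n", "    # so minimizing this loss pushes y_hat toward the q-th quantile of y.\n", "    diff = y - y_hat\n", "    return np.mean(np.maximum(q * diff, (q - 1) * diff))\n", "\n", "pinball_loss(np.array([10.0, 12.0, 14.0]),\n", "             np.array([11.0, 11.0, 15.0]), q=0.5)  # returns 0.5 (half the MAE)\n", "```\n", "\n", "At `q=0.5` the pinball loss reduces to half the MAE, while higher quantiles penalize under-prediction more heavily."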
] }, { "cell_type": "markdown", "id": "5cebf377", "metadata": {}, "source": [ "## Why?\n", "\n", "There is a shared belief in neural forecasting methods' capacity to improve our pipeline's accuracy and efficiency.\n", "\n", "Unfortunately, available implementations and published research have yet to realize neural networks' potential. They are hard to use and often fail to improve over statistical methods while being computationally prohibitive. For this reason, we created `NeuralForecast`, a library favoring proven, accurate, and efficient models, with a focus on usability." ] }, { "cell_type": "markdown", "id": "50d28044", "metadata": {}, "source": [ "## 💻 Installation\n", "\n", "\n", "### PyPI\n", "\n", "You can install `NeuralForecast`'s *released version* from the Python package index [pip](https://pypi.org/project/neuralforecast/) with:\n", "\n", "```bash\n", "pip install neuralforecast\n", "```\n", "\n", "(Installing inside a Python virtual environment or a conda environment is recommended.)\n", "\n", "\n", "### Conda\n", "\n", "You can also install `NeuralForecast`'s *released version* from [conda](https://anaconda.org/conda-forge/neuralforecast) with:\n", "\n", "```bash\n", "conda install -c conda-forge neuralforecast\n", "```\n", "\n", "### Dev Mode\n", "If you want to modify the code and see the effects in real time (without reinstalling), follow the steps below:\n", "\n", "```bash\n", "git clone https://github.com/Nixtla/neuralforecast.git\n", "cd neuralforecast\n", "pip install -e .\n", "```" ] }, { "cell_type": "markdown", "id": "d3aeb537", "metadata": {}, "source": [ "## How to Use" ] }, { "cell_type": "code", "execution_count": null, "id": "401a785d", "metadata": {}, "outputs": [], "source": [ "#| eval: false\n", "import pandas as pd\n", "\n", "import 
matplotlib.pyplot as plt\n", "from neuralforecast import NeuralForecast\n", "from neuralforecast.models import NBEATS, NHITS\n", "from neuralforecast.utils import AirPassengersDF\n", "\n", "# Split data and declare panel dataset\n", "Y_df = AirPassengersDF\n", "Y_train_df = Y_df[Y_df.ds <= '1959-12-31']  # 132 monthly observations for training\n", "Y_test_df = Y_df[Y_df.ds > '1959-12-31']    # 12 monthly observations for testing\n", "\n", "# Fit and predict with NBEATS and NHITS models\n", "horizon = len(Y_test_df)\n", "models = [NBEATS(input_size=2 * horizon, h=horizon, max_steps=50),\n", "          NHITS(input_size=2 * horizon, h=horizon, max_steps=50)]\n", "nf = NeuralForecast(models=models, freq='M')\n", "nf.fit(df=Y_train_df)\n", "Y_hat_df = nf.predict().reset_index()\n", "\n", "# Plot predictions\n", "fig, ax = plt.subplots(1, 1, figsize=(20, 7))\n", "Y_hat_df = Y_test_df.merge(Y_hat_df, how='left', on=['unique_id', 'ds'])\n", "plot_df = pd.concat([Y_train_df, Y_hat_df]).set_index('ds')\n", "\n", "plot_df[['y', 'NBEATS', 'NHITS']].plot(ax=ax, linewidth=2)\n", "\n", "ax.set_title('AirPassengers Forecast', fontsize=22)\n", "ax.set_ylabel('Monthly Passengers', fontsize=20)\n", "ax.set_xlabel('Timestamp [t]', fontsize=20)\n", "ax.legend(prop={'size': 15})\n", "ax.grid()" ] }, { "attachments": {}, "cell_type": "markdown", "id": "7cb3919a", "metadata": {}, "source": [ "## 🙏 How to Cite" ] }, { "attachments": {}, "cell_type": "markdown", "id": "2c669ec7", "metadata": {}, "source": [ "If you enjoy or benefit from using these Python implementations, we would greatly appreciate a citation of the repository." ] }, { "attachments": {}, "cell_type": "markdown", "id": "0fed17d7", "metadata": {}, "source": [ "```\n", "@misc{olivares2022library_neuralforecast,\n", " author={Kin G. 
Olivares and\n", " Cristian Challú and\n", " Federico Garza and\n", " Max Mergenthaler Canseco and\n", " Artur Dubrawski},\n", " title = {{NeuralForecast}: User friendly state-of-the-art neural forecasting models.},\n", " year={2022},\n", " howpublished={{PyCon} Salt Lake City, Utah, US 2022},\n", " url={https://github.com/Nixtla/neuralforecast}\n", "}\n", "```" ] } ], "metadata": { "kernelspec": { "display_name": "python3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 5 }