{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Data Requirements\n", "\n", "> This section explains the data requirements for `TimeGPT`. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#| hide\n", "from nixtla.utils import colab_badge" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "[![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/getting-started/5_data_requirements.ipynb)" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "#| echo: false\n", "colab_badge('docs/getting-started/5_data_requirements')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`TimeGPT` accepts `pandas` and `polars` dataframes in [long format](https://www.theanalysisfactor.com/wide-and-long-data/#comments) with the following required columns: \n", "\n", "- `ds` (timestamp): timestamp in format `YYYY-MM-DD` or `YYYY-MM-DD HH:MM:SS`. \n", "- `y` (numeric): The target variable to forecast. \n", "\n", "(Optionally, you can also pass a DataFrame without the `ds` column as long as it has a `DatetimeIndex`.)\n", "\n", "`TimeGPT` also works with distributed dataframes like `dask`, `spark`, and `ray`. \n", "\n", "You can also include exogenous features in the DataFrame as additional columns. For more information, follow this [tutorial](https://docs.nixtla.io/docs/tutorials-exogenous_variables).\n", "\n", "Below is an example of a valid input dataframe for `TimeGPT`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "<div>
" ], "text/plain": [ " timestamp value\n", "0 1949-01-01 112\n", "1 1949-02-01 118\n", "2 1949-03-01 132\n", "3 1949-04-01 129\n", "4 1949-05-01 121" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import pandas as pd \n", "\n", "df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv')\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that in this example, the `ds` column is named `timestamp` and the `y` column is named `value`. You can either:\n", "\n", "1. Rename the columns to `ds` and `y`, respectively, or\n", "\n", "2. Keep the current column names and specify them when using any method from the `NixtlaClient` class with the `time_col` and `target_col` arguments. \n", "\n", "For example, when using the `forecast` method from the `NixtlaClient` class, you must instantiate the class and then specify the columns names as follows. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from nixtla import NixtlaClient\n", "\n", "nixtla_client = NixtlaClient(\n", " api_key = 'my_api_key_provided_by_nixtla'\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "#| hide\n", "nixtla_client = NixtlaClient()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:nixtla.nixtla_client:Validating inputs...\n", "INFO:nixtla.nixtla_client:Inferred freq: MS\n", "INFO:nixtla.nixtla_client:Preprocessing dataframes...\n", "INFO:nixtla.nixtla_client:Querying model metadata...\n", "INFO:nixtla.nixtla_client:Restricting input...\n", "INFO:nixtla.nixtla_client:Calling Forecast Endpoint...\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ " timestamp TimeGPT\n", "0 1961-01-01 437.83792\n", "1 1961-02-01 426.06270\n", "2 1961-03-01 463.11655\n", "3 1961-04-01 478.24450\n", "4 1961-05-01 505.64648" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fcst = nixtla_client.forecast(df=df, h=12, time_col='timestamp', target_col='value')\n", "fcst.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, the `NixtlaClient` is infereing the frequency, but you can explicitly specify it with the `freq` argument.\n", "\n", "\n", "To learn more about how to instantiate the `NixtlaClient` class, refer to the [TimeGPT Quickstart](https://docs.nixtla.io/docs/getting-started-timegpt_quickstart)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Multiple Series " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're working with multiple time series, make sure that each series has a unique identifier. You can name this column `unique_id` or specify its name using the `id_col` argument when calling any method from the `NixtlaClient` class. This column should be a string, integer, or category.\n", "\n", "In this example, we have five series representing hourly electricity prices in five different markets. The columns already have the default names, so it's unnecessary to specify the `id_col`, `time_col`, or `target_col` arguments. If your columns have different names, specify these arguments as required." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " unique_id ds y\n", "0 BE 2016-10-22 00:00:00 70.00\n", "1 BE 2016-10-22 01:00:00 37.10\n", "2 BE 2016-10-22 02:00:00 37.10\n", "3 BE 2016-10-22 03:00:00 44.75\n", "4 BE 2016-10-22 04:00:00 37.10" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short.csv')\n", "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:nixtla.nixtla_client:Validating inputs...\n", "INFO:nixtla.nixtla_client:Inferred freq: h\n", "INFO:nixtla.nixtla_client:Preprocessing dataframes...\n", "INFO:nixtla.nixtla_client:Querying model metadata...\n", "INFO:nixtla.nixtla_client:Restricting input...\n", "INFO:nixtla.nixtla_client:Calling Forecast Endpoint...\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ " unique_id ds TimeGPT\n", "0 BE 2016-12-31 00:00:00 45.190582\n", "1 BE 2016-12-31 01:00:00 43.244987\n", "2 BE 2016-12-31 02:00:00 41.958897\n", "3 BE 2016-12-31 03:00:00 39.796680\n", "4 BE 2016-12-31 04:00:00 39.204865" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fcst = nixtla_client.forecast(df=df, h=24) # use id_col, time_col and target_col here if needed. \n", "fcst.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When working with a large number of time series, consider using a [distributed computing framework](https://docs.nixtla.io/docs/tutorials-computing_at_scale) to handle the data efficiently. `TimeGPT` supports frameworks such as [Spark](https://docs.nixtla.io/docs/tutorials-spark), [Dask](https://docs.nixtla.io/docs/tutorials-dask), and [Ray](https://docs.nixtla.io/docs/tutorials-ray)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exogenous Variables " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`TimeGPT` also accepts exogenous variables. You can add exogenous variables to your dataframe by including additional columns after the `y` column." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " unique_id ds y Exogenous1 Exogenous2 day_0 day_1 \\\n", "0 BE 2016-10-22 00:00:00 70.00 57253.0 49593.0 0.0 0.0 \n", "1 BE 2016-10-22 01:00:00 37.10 51887.0 46073.0 0.0 0.0 \n", "2 BE 2016-10-22 02:00:00 37.10 51896.0 44927.0 0.0 0.0 \n", "3 BE 2016-10-22 03:00:00 44.75 48428.0 44483.0 0.0 0.0 \n", "4 BE 2016-10-22 04:00:00 37.10 46721.0 44338.0 0.0 0.0 \n", "\n", " day_2 day_3 day_4 day_5 day_6 \n", "0 0.0 0.0 0.0 1.0 0.0 \n", "1 0.0 0.0 0.0 1.0 0.0 \n", "2 0.0 0.0 0.0 1.0 0.0 \n", "3 0.0 0.0 0.0 1.0 0.0 \n", "4 0.0 0.0 0.0 1.0 0.0 " ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short-with-ex-vars.csv')\n", "df.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When using exogenous variables, you also need to provide its future values. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " unique_id ds Exogenous1 Exogenous2 day_0 day_1 day_2 \\\n", "0 BE 2016-12-31 00:00:00 70318.0 64108.0 0.0 0.0 0.0 \n", "1 BE 2016-12-31 01:00:00 67898.0 62492.0 0.0 0.0 0.0 \n", "2 BE 2016-12-31 02:00:00 68379.0 61571.0 0.0 0.0 0.0 \n", "3 BE 2016-12-31 03:00:00 64972.0 60381.0 0.0 0.0 0.0 \n", "4 BE 2016-12-31 04:00:00 62900.0 60298.0 0.0 0.0 0.0 \n", "\n", " day_3 day_4 day_5 day_6 \n", "0 0.0 0.0 1.0 0.0 \n", "1 0.0 0.0 1.0 0.0 \n", "2 0.0 0.0 1.0 0.0 \n", "3 0.0 0.0 1.0 0.0 \n", "4 0.0 0.0 1.0 0.0 " ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "future_ex_vars_df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short-future-ex-vars.csv')\n", "future_ex_vars_df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "INFO:nixtla.nixtla_client:Validating inputs...\n", "INFO:nixtla.nixtla_client:Inferred freq: h\n", "INFO:nixtla.nixtla_client:Preprocessing dataframes...\n", "INFO:nixtla.nixtla_client:Using future exogenous features: ['Exogenous1', 'Exogenous2', 'day_0', 'day_1', 'day_2', 'day_3', 'day_4', 'day_5', 'day_6']\n", "INFO:nixtla.nixtla_client:Calling Forecast Endpoint...\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ " unique_id ds TimeGPT\n", "0 BE 2016-12-31 00:00:00 51.632830\n", "1 BE 2016-12-31 01:00:00 45.750877\n", "2 BE 2016-12-31 02:00:00 39.650543\n", "3 BE 2016-12-31 03:00:00 34.000072\n", "4 BE 2016-12-31 04:00:00 33.785370" ] }, "execution_count": null, "metadata": {}, "output_type": "execute_result" } ], "source": [ "fcst = nixtla_client.forecast(df=df, X_df=future_ex_vars_df, h=24)\n", "fcst.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To learn more about how to use exogenous variables with `TimeGPT`, consult the [Exogenous Variables](https://docs.nixtla.io/docs/tutorials-exogenous_variables) tutorial. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Important Considerations" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When using `TimeGPT`, the data cannot contain missing values. This means that for every series, there should be no gaps in the timestamps and no missing values in the target variable. \n", "\n", "For more, please refer to the tutorial on [Dealing with Missing Values in TimeGPT](https://docs.nixtla.io/docs/tutorials-dealing_with_missing_values_in_timegpt). \n", "\n", "### Minimum Data Requirements (for AzureAI)\n", "\n", "`TimeGPT` currently supports any amount of data for generating point forecasts. That is, the minimum size per series to expect results from this call `nixtla_client.forecast(df=df, h=h, freq=freq)` is one, regardless of the frequency.\n", "\n", "For Azure AI, when using the arguments `level`, `finetune_steps`, `X_df` (exogenous variables), or `add_history`, the API requires a minimum number of data points depending on the frequency. Here are the minimum sizes for each frequency:\n", "\n", "
\n", "\n", "| Frequency | Minimum Size |\n", "|--------------------------|--------------|\n", "| Hourly and subhourly (e.g., \"H\", \"min\", \"15T\") | 1008 |\n", "| Daily (\"D\") | 300 |\n", "| Weekly (e.g., \"W-MON\",..., \"W-SUN\") | 64 |\n", "| Monthly and other frequencies (e.g., \"M\", \"MS\", \"Y\") | 48 |\n", "\n", "
\n", "\n", "For cross-validation, you need to consider these numbers as well as the forecast horizon (`h`), the number of windows (`n_windows`), and the gap between windows (`step_size`). Thus, the minimum number of observations per series in this case would be determined by the following relationship:\n", "\n", "
\n", "\n", "Minimum number described previously + h + step_size * (n_windows - 1)\n", "\n", "<br>
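For example, since consecutive cross-validation windows start `step_size` observations apart, the windows plus the final horizon require `h + step_size * (n_windows - 1)` observations beyond the base minimum. A minimal sketch of this arithmetic (the helper name and frequency-group keys are illustrative, not part of the `nixtla` package):

```python
# Illustrative helper (not part of the nixtla package): estimate the minimum
# number of observations per series needed for cross-validation on Azure AI.
# The base minimums are taken from the frequency table above.

MIN_SIZE_PER_FREQ = {
    "hourly_or_subhourly": 1008,  # e.g., "H", "min", "15T"
    "daily": 300,                 # "D"
    "weekly": 64,                 # e.g., "W-MON", ..., "W-SUN"
    "monthly_or_other": 48,       # e.g., "M", "MS", "Y"
}

def min_obs_for_cross_validation(freq_group: str, h: int, n_windows: int, step_size: int) -> int:
    """Base minimum plus the span covered by all validation windows."""
    base = MIN_SIZE_PER_FREQ[freq_group]
    return base + h + step_size * (n_windows - 1)

# Hourly series, 24-step horizon, 3 windows spaced 24 steps apart:
print(min_obs_for_cross_validation("hourly_or_subhourly", h=24, n_windows=3, step_size=24))  # 1080
```

With `n_windows=1`, the requirement reduces to the base minimum plus `h`, matching the single-window case.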
\n" ] } ], "metadata": { "kernelspec": { "display_name": "python3", "language": "python", "name": "python3" } }, "nbformat": 4, "nbformat_minor": 4 }