---
title: "Multiple Series Forecasting"
description: "Learn how to forecast multiple time series at once with TimeGPT."
icon: "chart-line-up"
---
# Multiple Series Forecasting with TimeGPT
TimeGPT can forecast multiple series at once. To do this, provide a DataFrame in which each series is identified by a distinct value in the `unique_id` column.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/capabilities/forecast/06_multiple_series.ipynb)
<Info>
TimeGPT is a powerful forecasting solution that supports simultaneous predictions for multiple time series. This guide will walk you through setting up your Nixtla Client, loading data, and generating forecasts.
</Info>
## How It Works
<CardGroup>
<Card title="Key Concept" icon="lightbulb">
Forecasting multiple series requires each observation to carry its series' identifier in the `unique_id` column. TimeGPT handles each series independently and returns forecasts for every unique series in your dataset.
</Card>
</CardGroup>
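For instance, two series stacked in long format would look like this (the series names, dates, and values below are illustrative only; the `unique_id`/`ds`/`y` column names follow the client's default convention):

```python Long-Format DataFrame Sketch
import pandas as pd

# Two hypothetical series, "store_1" and "store_2", stacked in long format.
dates = pd.date_range("2024-01-01", periods=3, freq="D")
df = pd.DataFrame({
    "unique_id": ["store_1"] * 3 + ["store_2"] * 3,
    "ds": list(dates) * 2,
    "y": [10, 12, 13, 20, 18, 21],
})
print(df["unique_id"].nunique())  # 2 distinct series
```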
<Steps>
<Step title="1. Import Required Libraries">
<CodeGroup>
```python Import Libraries
import pandas as pd
from nixtla import NixtlaClient
```
</CodeGroup>
</Step>
<Step title="2. Initialize the Nixtla Client">
Choose between the default Nixtla endpoint or an Azure AI endpoint.
<Tabs>
<Tab title="Default Endpoint">
<CodeGroup>
```python Initialize Nixtla Client (Default)
nixtla_client = NixtlaClient(
# defaults to os.environ.get("NIXTLA_API_KEY")
api_key='my_api_key_provided_by_nixtla'
)
```
</CodeGroup>
</Tab>
<Tab title="Azure AI Endpoint">
<CodeGroup>
```python Initialize Nixtla Client (Azure AI)
nixtla_client = NixtlaClient(
base_url="your azure ai endpoint",
api_key="my_api_key_provided_by_nixtla"
)
```
</CodeGroup>
</Tab>
</Tabs>
</Step>
<Step title="3. Prepare Your Data">
<CodeGroup>
```python Load Data
# Load data
df = pd.read_csv(
'https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short.csv'
)
```
</CodeGroup>
</Step>
<Step title="4. Forecast Multiple Series">
<CodeGroup>
```python Forecast Multiple Series
# Forecast multiple series
forecast_df = nixtla_client.forecast(
df=df,
h=24
)
```
</CodeGroup>
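The returned DataFrame keeps the `unique_id` column, so per-series results can be separated with a standard group-by. A sketch on a mock result frame (the series names and values are illustrative, not actual TimeGPT output):

```python Split Forecasts by Series
import pandas as pd

# Mock result frame mimicking the layout of a multi-series forecast.
forecast_df = pd.DataFrame({
    "unique_id": ["BE", "BE", "DE", "DE"],
    "ds": pd.to_datetime(["2017-01-01 00:00", "2017-01-01 01:00"] * 2),
    "TimeGPT": [50.1, 48.3, 30.2, 29.8],
})
# One sub-frame per series
per_series = {uid: group for uid, group in forecast_df.groupby("unique_id")}
print(sorted(per_series))  # ['BE', 'DE']
```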
</Step>
</Steps>
<AccordionGroup>
<Accordion title="View Forecast Logs" icon="terminal">
```bash Log Output
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: H
INFO:nixtla.nixtla_client:Restricting input...
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
```
</Accordion>
</AccordionGroup>
<Info>
**Available models in Azure AI**
If using an Azure AI endpoint, set the `model` parameter explicitly to `"azureai"`:
</Info>
<CodeGroup>
```python Azure AI Model Setting
nixtla_client.forecast(
...,
model="azureai"
)
```
</CodeGroup>
<CardGroup>
<Card title="Choosing the Right Model" icon="gear">
If you're using the public API, two models are supported: `timegpt-1` and `timegpt-1-long-horizon`.
The default is `timegpt-1`. Check out the [long horizon tutorial](https://docs.nixtla.io/docs/tutorials-long_horizon_forecasting) to learn when and how to apply `timegpt-1-long-horizon`.
</Card>
</CardGroup>
For more details, see the full tutorial:
[Multiple series forecasting](https://docs.nixtla.io/docs/tutorials-multiple_series_forecasting).
---
title: "Prediction Intervals"
description: "Learn how to use the level parameter to generate prediction intervals that quantify forecast uncertainty."
icon: "chart-candlestick"
---
<CardGroup>
<Card title="What are Prediction Intervals?" icon="circle-info">
Prediction intervals measure the uncertainty around forecasted values. By specifying a confidence level, you can visualize the range in which future observations are expected to fall.
</Card>
<Card title="Key Parameter: level" icon="gear">
The **level** parameter accepts values between 0 and 100 (including decimals). For example, `[80]` represents an 80% prediction interval.
</Card>
</CardGroup>
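To build intuition for what `level` means: an 80% interval spans the 10th to 90th percentiles of the forecast distribution. A quick illustration with synthetic numbers (this is not TimeGPT's internal method, just the percentile arithmetic):

```python Interval Percentile Intuition
import numpy as np

level = 80
alpha = (100 - level) / 2  # 10.0: probability mass excluded on each tail
samples = np.random.default_rng(0).normal(100, 10, 10_000)
# lo and hi bracket the central 80% of the distribution
lo, hi = np.percentile(samples, [alpha, 100 - alpha])
```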
## Overview
Use the `forecast` method's **level** parameter to generate prediction intervals. This helps quantify the uncertainty around your forecasts.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/capabilities/forecast/10_prediction_intervals.ipynb)
<Steps>
<Step title="Step 1: Import Dependencies">
```python Import Dependencies
import pandas as pd
from nixtla import NixtlaClient
```
</Step>
<Step title="Step 2: Initialize NixtlaClient">
```python Initialize NixtlaClient
nixtla_client = NixtlaClient(
# defaults to os.environ.get("NIXTLA_API_KEY")
api_key='my_api_key_provided_by_nixtla'
)
```
</Step>
<Step title="(Optional) Use an Azure AI Endpoint">
<AccordionGroup>
<Accordion title="Configuring an Azure AI Endpoint">
<Check>
**Use an Azure AI endpoint**
To use an Azure AI endpoint, set the `base_url` argument as follows:
</Check>
```python Azure AI Endpoint Configuration
nixtla_client = NixtlaClient(
base_url="your azure ai endpoint",
api_key="your api_key"
)
```
</Accordion>
</AccordionGroup>
</Step>
<Step title="Step 3: Load Dataset">
```python Load Dataset
df = pd.read_csv(
"https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv"
)
```
</Step>
<Step title="Step 4: Generate Forecast with an 80% Interval">
```python Generate 80% Interval Forecast
forecast_df = nixtla_client.forecast(
df=df,
h=12,
time_col='timestamp',
target_col='value',
level=[80]
)
```
</Step>
<Step title="Step 5: Plot Predictions and Intervals">
```python Plot Forecast and Intervals
nixtla_client.plot(
df=df,
forecasts_df=forecast_df,
time_col='timestamp',
target_col='value',
level=[80]
)
```
</Step>
</Steps>
<Info>
Logs indicate the validation and preprocessing steps, along with the inferred data frequency:
</Info>
<Accordion title="Forecasting Log Output">
```bash Log Output
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: MS
INFO:nixtla.nixtla_client:Restricting input...
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
```
</Accordion>
<Frame caption="Forecast with an 80% Prediction Interval">
![Forecast](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/capabilities/forecast/10_prediction_intervals_files/figure-markdown_strict/cell-10-output-2.png)
</Frame>
<AccordionGroup>
<Accordion title="Using Azure AI Models">
<Info>
**Available Models in Azure AI**
If you are using an Azure AI endpoint, set the `model` parameter to `"azureai"`:
</Info>
```python Azure AI Models
nixtla_client.forecast(..., model="azureai")
```
The public API supports two models:
- `timegpt-1` (default)
- `timegpt-1-long-horizon`
See [this tutorial](https://docs.nixtla.io/docs/tutorials-long_horizon_forecasting) for guidance on using **timegpt-1-long-horizon**.
</Accordion>
</AccordionGroup>
<Info>
For more information on uncertainty estimation, refer to the tutorials about [quantile forecasts](https://docs.nixtla.io/docs/tutorials-quantile_forecasts) and [prediction intervals](https://docs.nixtla.io/docs/tutorials-prediction_intervals).
</Info>
---
title: "Forecasting Quickstart"
description: "Get started quickly with TimeGPT forecasting using the Nixtla API."
icon: "rocket"
---
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/capabilities/forecast/01_quickstart.ipynb)
# Quickstart
TimeGPT makes forecasting straightforward with the `forecast` method in the Nixtla API. Pass in your DataFrame, specify the time and target columns, and call `forecast`. You can also visualize results with the `plot` method.
<Info>
Detailed guidance on data requirements is available [here](https://docs.nixtla.io/docs/getting-started-data_requirements).
</Info>
<Steps>
<Step title="1. Install & Import Dependencies">
Make sure you have the latest Nixtla Client installed, then import the required libraries:
```bash Nixtla Client Installation
pip install nixtla
```
```python Import Libraries
import pandas as pd
from nixtla import NixtlaClient
```
</Step>
<Step title="2. Initialize the Nixtla Client">
<Tabs>
<Tab title="Standard Usage">
<Check>
Provide your API key from Nixtla to authenticate:
</Check>
```python Nixtla Client Standard Initialization
nixtla_client = NixtlaClient(
# defaults to os.environ.get("NIXTLA_API_KEY")
api_key='my_api_key_provided_by_nixtla'
)
```
</Tab>
<Tab title="Using Azure AI Endpoint">
<Check>
Use an Azure AI endpoint<br/>
If you'd like to use Azure AI, set the `base_url` to your Azure endpoint:
</Check>
```python Nixtla Client Azure AI Endpoint
nixtla_client = NixtlaClient(
base_url="your azure ai endpoint",
api_key="your api_key"
)
```
</Tab>
</Tabs>
</Step>
<Step title="3. Load Data & Create Forecast">
```python Load Data and Run Forecast
# Read data
df = pd.read_csv("https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv")
# Forecast for the next 12 time steps
forecast_df = nixtla_client.forecast(
df=df,
h=12,
time_col='timestamp',
target_col="value"
)
```
</Step>
<Step title="4. Visualize Predictions">
```python Plot Forecast Results
# Plot predictions
nixtla_client.plot(
df=df,
forecasts_df=forecast_df,
time_col='timestamp',
target_col='value'
)
```
</Step>
</Steps>
Below is an example of log output when running a forecast:
<Accordion title="Forecast Log Output">
```bash Forecast Process Logs
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: MS
INFO:nixtla.nixtla_client:Restricting input...
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
```
</Accordion>
<Frame caption="TimeGPT Forecast Plot">
![Forecast Plot](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/capabilities/forecast/01_quickstart_files/figure-markdown_strict/cell-10-output-2.png)
</Frame>
<Info>
**Available models in Azure AI**<br/>
To use an Azure AI endpoint for forecasting, set the `model` parameter to `"azureai"`:
```python Azure AI Model Setting
nixtla_client.forecast(
    ...,
    model="azureai"
)
```
</Info>
<CardGroup cols={2}>
<Card title="timegpt-1">
Default option for general forecasting needs.
</Card>
<Card title="timegpt-1-long-horizon">
Optimized for extended forecast horizons. [Learn more here](https://docs.nixtla.io/docs/tutorials-long_horizon_forecasting).
</Card>
</CardGroup>
---
title: "AzureAI"
description: "Guide to deploying and using the TimeGEN-1 model as an Azure AI endpoint."
icon: "cloud"
---
<Info>
**Azure Deployment Note**
The foundational models for time series developed by Nixtla can be deployed directly to Azure subscriptions. This guide explains how to quickly start using TimeGEN-1 as an Azure AI endpoint. If you currently use the `nixtla` library, the Azure deployment works as a drop-in replacement; simply adjust the client parameters (**endpoint URL**, **API key**, and **model name**).
</Info>
## Deploying TimeGEN-1
<AccordionGroup>
<Accordion title="Overview">
TimeGEN-1 is Nixtla's foundation AI model for time series. You can deploy it to your Azure subscription through the Azure Portal or via CLI. Once deployed, it becomes accessible as an Azure AI endpoint.
</Accordion>
<Accordion title="Prerequisites">
- An active Azure subscription with permissions to create AI endpoints.
- Familiarity with the Azure Portal or Azure CLI for creating and managing deployments.
- Basic understanding of Nixtla's Python client library (optional but recommended).
</Accordion>
</AccordionGroup>
<Frame caption="TimeGEN-1 on the Azure Portal">
![Azure Portal Example](/images/docs/nixtla-announcement.png)
</Frame>
## Using the Model
Once TimeGEN-1 is deployed and you have access to its endpoint, you can interact with the model as you would with a standard Nixtla endpoint.
<Check>
Ensure you have your deployment URL and API key ready before proceeding.
</Check>
<AccordionGroup>
<Accordion title="Configure Environment Variables">
Define the following environment variables in your local or hosted environment:
| Environment Variable | Description | Format / Example |
|-----------------------------|-------------------------------|-------------------------------------------------|
| `AZURE_AI_NIXTLA_BASE_URL` | Your API URL | `https://your-endpoint.inference.ai.azure.com/` |
| `AZURE_AI_NIXTLA_API_KEY` | Your Azure AI authentication API key | `0000000000000000000000` |
</Accordion>
</AccordionGroup>
## How to Use
<Steps>
<Step title="Install the Nixtla Client">
```bash Nixtla Client Installation
pip install nixtla
```
This installs the official Nixtla Python client library so you can make forecast requests to your Azure AI endpoint.
</Step>
<Step title="Set Up Your Environment">
Make sure you have the following environment variables properly configured:
- `AZURE_AI_NIXTLA_BASE_URL`
- `AZURE_AI_NIXTLA_API_KEY`
</Step>
<Step title="Initialize the Nixtla Client">
```python Nixtla Client Initialization
import os
from nixtla import NixtlaClient

base_url = os.environ["AZURE_AI_NIXTLA_BASE_URL"]
api_key = os.environ["AZURE_AI_NIXTLA_API_KEY"]
model = "azureai"

nixtla_client = NixtlaClient(
    api_key=api_key,
    base_url=base_url
)
```
Here, we create a new client instance using your Azure endpoint URL and API key.
</Step>
<Step title="Make a Forecast Request">
```python Forecast Request Example
# Example forecast call; replace "..." with your actual parameters
nixtla_client.forecast(
    ...,
    model=model,
)
```
Replace the ellipsis (**...**) with your specific forecasting parameters and then call the endpoint to get predictions.
</Step>
</Steps>
<CardGroup cols={2}>
<Card title="Key Concept: Drop-In Replacement">
Because TimeGEN-1 on Azure uses the same API structure as the Nixtla library, you only need to switch out the **base URL**, **API key**, and **model name**. Your workflow remains unchanged.
</Card>
<Card title="Key Concept: Seamless Integration">
Deploying TimeGEN-1 to Azure allows you to leverage Azure's scalability, security, and management tools directly for your time series forecasting needs without altering core application logic.
</Card>
</CardGroup>
<Info>
**Tip:** Remember that you can use any Azure-supported authentication or security measures to further protect your endpoint, such as Azure Key Vault for managing secrets or role-based access control for restricting usage.
</Info>
---
title: "Development"
description: "Preview changes locally to update your docs"
---
<Info>
**Prerequisite**: Please install Node.js (version 19 or higher) before proceeding.
Please upgrade to `docs.json` before proceeding and delete the legacy `mint.json` file.
</Info>
Follow these steps to install and run Mintlify on your operating system:
**Step 1**: Install Mintlify:
<CodeGroup>
```bash npm
npm i -g mintlify
```
```bash yarn
yarn global add mintlify
```
</CodeGroup>
**Step 2**: Navigate to the docs directory (where the `docs.json` file is located) and execute the following command:
```bash
mintlify dev
```
A local preview of your documentation will be available at `http://localhost:3000`.
### Custom Ports
By default, Mintlify uses port 3000. You can customize the port Mintlify runs on by using the `--port` flag. To run Mintlify on port 3333, for instance, use this command:
```bash
mintlify dev --port 3333
```
If you attempt to run Mintlify on a port that's already in use, it will use the next available port:
```md
Port 3000 is already in use. Trying 3001 instead.
```
## Mintlify Versions
Please note that each CLI release is associated with a specific version of Mintlify. If your local website doesn't align with the production version, please update the CLI:
<CodeGroup>
```bash npm
npm i -g mintlify@latest
```
```bash yarn
yarn global upgrade mintlify
```
</CodeGroup>
## Validating Links
The CLI can assist with validating reference links made in your documentation. To identify any broken links, use the following command:
```bash
mintlify broken-links
```
## Deployment
<Tip>
Unlimited editors available under the [Pro Plan](https://mintlify.com/pricing)
and above.
</Tip>
If the deployment is successful, you should see "Checks passed".
## Code Formatting
We suggest using extensions on your IDE to recognize and format MDX. If you're a VSCode user, consider the [MDX VSCode extension](https://marketplace.visualstudio.com/items?itemName=unifiedjs.vscode-mdx) for syntax highlighting, and [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) for code formatting.
## Troubleshooting
<AccordionGroup>
<Accordion title='Error: Could not load the "sharp" module using the darwin-arm64 runtime'>
This may be due to an outdated version of node. Try the following:
1. Remove the currently-installed version of mintlify: `npm remove -g mintlify`
2. Upgrade to Node v19 or higher.
3. Reinstall mintlify: `npm install -g mintlify`
</Accordion>
<Accordion title="Issue: Encountering an unknown error">
Solution: Delete the `~/.mintlify` folder in your home directory, then run `mintlify dev` again.
</Accordion>
</AccordionGroup>
Curious about what changed in the CLI version? [Check out the CLI changelog.](https://www.npmjs.com/package/mintlify?activeTab=versions)
---
title: "Changelog"
description: "Complete list of changes for each version of the Nixtla client."
icon: "clipboard"
---
## Changelog Overview
Below you’ll find the complete list of changes for each version of the Nixtla client. Expand any version to see details about new features, improvements, changes, or deprecations, along with links to full release notes.
<AccordionGroup>
<Accordion title="Version 0.6.6">
### Feature Enhancements
<Info>
**Online anomaly detection**
We introduce the `online_anomaly_detection` method, which lets you specify a `detection_size` window in which to look for anomalies.
</Info>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.6.6)
</Accordion>
<Accordion title="Version 0.6.5">
### Feature Enhancements
<CardGroup>
<Card title="Persisting fine-tuned models">
You can now run an isolated fine-tuning process, save the model, and use it afterward in all of our methods:
<br />
<Steps>
<Step title="Fine-tune the model" />
<Step title="Save the model" />
<Step title="Use it in forecast, cross_validation, or detect_anomalies" />
</Steps>
</Card>
<Card title="zstd compression">
All requests above 1MB are automatically compressed using
[zstd](https://github.com/facebook/zstd), which helps when sending large data volumes or with slower connections.
</Card>
<Card title="Refit argument in `cross_validation`">
Set `refit=False` to fine-tune the model only on the first window in
`cross_validation`. This significantly decreases computation time.
</Card>
</CardGroup>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.6.5)
</Accordion>
<Accordion title="Version 0.6.4">
### Feature Enhancements
<CardGroup>
<Card title="Integer and custom pandas frequencies">
The client now supports integer timestamps and frequencies, and custom pandas timestamps
(including [CustomBusinessHour](https://pandas.pydata.org/docs/reference/api/pandas.tseries.offsets.CustomBusinessHour.html)).
</Card>
<Card title="Usage method">
You can programmatically retrieve your current API call count and limits using the new `usage` method.
</Card>
<Card title="Historic exogenous in cross validation">
The `cross_validation` method now accepts the `hist_exog_list` parameter, which lets you specify historical exogenous features.
</Card>
</CardGroup>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.6.4)
</Accordion>
<Accordion title="Version 0.6.2">
### Feature Enhancements
<Info>
**Fine-tune depth**
Specify the fine-tuning depth through the `finetune_depth` parameter in
`forecast` and `cross_validation`.
</Info>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.6.2)
</Accordion>
<Accordion title="Version 0.6.0">
### Feature Enhancements
<CardGroup>
<Card title="V2 API endpoints">
The client now uses V2 API endpoints, providing lower latency.
</Card>
<Card title="orjson serialization">
Payload serialization now uses [orjson](https://github.com/ijl/orjson) for performance improvements, especially with exogenous features.
</Card>
<Card title="Historical exogenous features">
Historical exogenous features (`hist_exog_list`) are supported in the `forecast` method.
</Card>
<Card title="Feature contributions">
Activate feature contributions in the `forecast` method:
```python Feature Contributions Example
nixtla_client.forecast(..., feature_contributions=True)
```
</Card>
</CardGroup>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.6.0)
</Accordion>
<Accordion title="Version 0.5.0">
### Feature Enhancements
<Info>
**Cross validation endpoint**
The `cross_validation` method now performs a single API call instead of individual calls per window.
</Info>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.5.0)
</Accordion>
<Accordion title="Version 0.4.0">
### Changes & Deprecations
<Warning>
**Important:**
The `nixtlats` package has been deprecated in favor of the `nixtla` package.
</Warning>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.4.0)
</Accordion>
<Accordion title="Version 0.3.0">
### Changes & Deprecations
<Warning>
**Deprecation of `TimeGPT` class**
Replace `TimeGPT` with `NixtlaClient`. Also note:
- Parameters renamed: `token` → `api_key`, `environment` → `base_url`
- Method renamed: `validate_token` → `validate_api_key`
- Update environment variables to match new parameter names.
</Warning>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.3.0)
</Accordion>
<Accordion title="Version 0.2.0 (Previously Released)">
### Changes & Deprecations
<Info>
Renamed fine-tuning parameter:
</Info>
- From `finetune_steps` to `fewshot_steps`
(This change was later reverted for compatibility reasons).
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.2.0)
</Accordion>
<Accordion title="Version 0.1.21">
### Feature Enhancements
<Info>
Quantile forecasts added to:
</Info>
- `forecast`
- `cross_validation`
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.1.21)
</Accordion>
<Accordion title="Version 0.1.20">
### Feature Enhancements
<Info>
Enhanced fine-tuning capability with new parameters:
</Info>
- `finetune_loss`
- `finetune_steps`
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.1.20)
</Accordion>
<Accordion title="Version 0.1.19">
### Feature Enhancements
<Info>
Implemented `num_partitions` parameter for improved resource optimization.
</Info>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.1.19)
</Accordion>
<Accordion title="Version 0.1.18">
### Feature Enhancements
<CardGroup>
<Card title="New model: `timegpt-1-long-horizon`">
Support for longer forecast horizons with our specialized model.
</Card>
<Card title="Cross-validation support for multiple windows">
Evaluate model performance across different time periods.
</Card>
<Card title="Improved retry behavior">
Using parameters:
- `max_retries`
- `retry_interval`
- `max_wait_time`
</Card>
<Card title="Environment tokens">
Environment tokens are now handled automatically.
</Card>
<Card title="Introduced a [FAQ section](https://docs.nixtla.io/docs/faqs)">
Common questions and answers to help you get started.
</Card>
</CardGroup>
[See full changelog here](https://github.com/Nixtla/nixtla/releases/v0.1.18)
</Accordion>
</AccordionGroup>
---
title: "Glossary"
description: "Key terminology and concepts for time series forecasting with TimeGPT"
icon: "book-open"
---
<Info>
Below are key concepts related to time series forecasting, designed to help you understand and harness the capabilities of TimeGPT. Click any section to expand and learn more.
</Info>
<AccordionGroup>
<Accordion title="Time Series">
A time series is a sequence of data points indexed by time, used to model phenomena that change over intervals (e.g., stock prices, temperature measurements, or product sales).
Time series data often includes:
- **Trend:** The long-term upward or downward direction of the data.
- **Seasonality:** A recurring behavior with a known frequency (e.g., daily, weekly, yearly).
- **Remainder:** Random noise or residual effects after accounting for trend and seasonality.
</Accordion>
<Accordion title="Forecasting">
Forecasting predicts future values of a time series based on historical data. It plays a vital role in decision-making across many industries, including finance, healthcare, retail, and economics.
Use cases vary from simple to advanced methods, such as:
<Tabs>
<Tab title="Model Approaches">
- **Univariate models:** Use a single variable to predict future values.
- **Multivariate models:** Incorporate multiple variables to generate forecasts.
- **Local models:** Estimate parameters independently for each series.
- **Global models:** Estimate parameters jointly across multiple series.
</Tab>
<Tab title="Forecast Types">
- **Point forecasts:** Provide single-value predictions.
- **Probabilistic forecasts:** Express forecasts as probability distributions to capture uncertainty.
</Tab>
</Tabs>
</Accordion>
<Accordion title="Foundation Model">
A foundation model is a large, pre-trained model adaptable across multiple tasks, including forecasting. Popularized in natural language processing and computer vision, foundation models now also serve sequential data such as time series. They are trained on extensive datasets to capture general patterns, and can be adapted for specific tasks with fine-tuning.
</Accordion>
<Accordion title="TimeGPT">
TimeGPT is the first foundation model built specifically for time series forecasting, developed by Nixtla. Trained on billions of observations across diverse, publicly available datasets, TimeGPT:
- Produces accurate forecasts for new time series without additional training (zero-shot).
- Sequentially reads "tokens" of historic data from left to right to predict future values.
</Accordion>
<Accordion title="Tokens">
Tokens in TimeGPT are small sequential segments of time series data. This is analogous to NLP tokens (words or characters), but for time series data points. By reading tokens one by one, TimeGPT uncovers complex dependencies in the sequence.
</Accordion>
<Accordion title="Fine-tuning">
Fine-tuning adapts the pre-trained TimeGPT model to a specific dataset or task by performing additional training. While TimeGPT can already forecast in a zero-shot fashion (no extra data required), fine-tuning with your custom dataset often boosts accuracy.
<Info>
**Learn more:** [How to fine-tune TimeGPT](https://docs.nixtla.io/docs/tutorials-fine_tuning)
</Info>
```python fine-tuning-example
# Example: fine-tuning TimeGPT through the finetune_steps parameter
from nixtla import NixtlaClient

nixtla_client = NixtlaClient(api_key='my_api_key_provided_by_nixtla')

# Assume 'my_time_series_df' is your own dataset with 'ds' and 'y' columns
forecast_df = nixtla_client.forecast(
    df=my_time_series_df,
    h=12,
    finetune_steps=5  # run 5 additional training iterations on your data
)
print(forecast_df)
```
</Accordion>
<Accordion title="Historical Forecasts">
Historical forecasts (also called in-sample forecasts) are predictions made on previously observed data to evaluate a model's accuracy. By comparing these predictions to the actual values, you can assess how well your model performs on known data.
<Info>
**Learn more:** [Making historical forecasts with TimeGPT](https://docs.nixtla.io/docs/tutorials-historical_forecast)
</Info>
</Accordion>
<Accordion title="Anomaly Detection">
Anomaly detection identifies points in a time series that differ significantly from typical behavior. These anomalies may stem from data collection errors, abrupt changes in underlying patterns, or external events. They can distort forecasts by obscuring trends or seasonal patterns.
Common uses include:
- Fraud detection in finance
- Performance monitoring for digital services
- Spotting unexpected trends in energy consumption
<Info>
**Learn more:** [Detect anomalies with TimeGPT](https://docs.nixtla.io/docs/capabilities-anomaly-detection-anomaly_detection)
</Info>
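As a toy illustration of the idea (not TimeGPT's detection method), a simple z-score rule flags points that sit far from the series mean:

```python Z-Score Anomaly Sketch
import pandas as pd

# Toy series with one obvious spike; flag points more than
# 2 standard deviations from the mean.
s = pd.Series([10, 11, 9, 10, 12, 50, 11, 10])
z = (s - s.mean()) / s.std()
anomalies = s[z.abs() > 2]
print(list(anomalies))  # [50]
```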
</Accordion>
<Accordion title="Time Series Cross Validation">
Time series cross-validation assesses how well a forecasting model performs by repeatedly training on historical data and testing on the next time segment. Unlike standard cross-validation, it respects the time order and avoids data leakage.
<Steps>
<Step>Partition your time-based dataset into multiple segments.</Step>
<Step>Train the model on an earlier segment.</Step>
<Step>Forecast the subsequent segment.</Step>
<Step>Compare predictions to actual values.</Step>
<Step>Slide the window forward and repeat.</Step>
</Steps>
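The procedure above can be sketched as a rolling-origin split. A minimal illustration on a toy series (the window sizes here are arbitrary):

```python Rolling-Origin Split Sketch
import pandas as pd

# Toy series of 10 points; forecast horizon of 2 per window.
series = pd.Series(range(10))
h = 2
splits = []
for cutoff in range(6, len(series) - h + 1, h):
    train = series.iloc[:cutoff]           # everything before the cutoff
    test = series.iloc[cutoff:cutoff + h]  # the next h points
    splits.append((len(train), list(test)))
print(splits)  # [(6, [6, 7]), (8, [8, 9])]
```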
<Info>
**Learn more:** [Performing cross-validation with TimeGPT](https://docs.nixtla.io/docs/tutorials-cross_validation)
</Info>
</Accordion>
<Accordion title="Exogenous Variables">
Exogenous variables are external factors that influence a target time series but are not driven by the series itself (e.g., holidays or weather conditions). Including these variables in a forecast can improve accuracy by capturing additional context.
<Info>
**Learn more:** [Including exogenous variables in TimeGPT](https://docs.nixtla.io/docs/tutorials-exogenous_variables)
</Info>
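For example, a series with a hypothetical `is_holiday` flag alongside the target (the column names follow the usual `unique_id`/`ds`/`y` convention; the data is made up):

```python Exogenous Variable Sketch
import pandas as pd

# Target series plus an exogenous holiday flag that is known in advance.
df = pd.DataFrame({
    "unique_id": ["sales"] * 4,
    "ds": pd.date_range("2024-12-23", periods=4, freq="D"),
    "y": [120, 135, 80, 60],
    "is_holiday": [0, 0, 1, 1],  # exogenous regressor
})
```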
</Accordion>
</AccordionGroup>
---
title: "TimeGPT Quickstart (Polars)"
description: "Get started with TimeGPT using Polars for efficient data processing."
icon: "bolt-lightning"
---
<Info>
TimeGPT is a production-ready, generative pretrained transformer for time series. It can make accurate predictions in just a few lines of code across domains like retail, electricity, finance, and IoT.
</Info>
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/getting-started/21_polars_quickstart.ipynb)
<Steps>
<Step title="Create a TimeGPT Account and Generate an API Key">
1. Visit [dashboard.nixtla.io](https://dashboard.nixtla.io/)
2. Sign in with Google, GitHub, or email
3. Select **API Keys** in the menu, then click **Create New API Key**
4. Copy your generated API key using the provided button
<Frame caption="TimeGPT dashboard with API key management">
![TimeGPT dashboard with API key management](https://github.com/Nixtla/nixtla/blob/main/nbs/img/dashboard.png?raw=true)
</Frame>
</Step>
<Step title="Install Nixtla">
```bash install-nixtla
pip install nixtla
```
</Step>
<Step title="Import and Validate Your Nixtla Client">
```python client-setup
from nixtla import NixtlaClient
# Instantiate the NixtlaClient
nixtla_client = NixtlaClient(
api_key='my_api_key_provided_by_nixtla'
)
# Validate the API key
nixtla_client.validate_api_key()
```
<Warning>
For enhanced security, check [Setting Up your API Key](https://docs.nixtla.io/docs/getting-started-setting_up_your_api_key).
</Warning>
</Step>
<Step title="Make Forecasts with Polars">
<AccordionGroup>
<Accordion title="1. Load and Preview the Dataset">
<Info>
We use the **AirPassengers** dataset, containing monthly airline passenger totals from 1949 to 1960. This dataset is a classic example for time series forecasting.
</Info>
```python load-airpassengers-data
import polars as pl
df = pl.read_csv(
'https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv',
try_parse_dates=True,
)
df.head()
```
| timestamp | value |
| ------------ | ------- |
| 1949-01-01 | 112 |
| 1949-02-01 | 118 |
| 1949-03-01 | 132 |
| 1949-04-01 | 129 |
| 1949-05-01 | 121 |
<Info>
**Plot the dataset** for a quick visual inspection:
</Info>
```python plot-airpassengers-data
nixtla_client.plot(df, time_col='timestamp', target_col='value')
```
<Frame caption="Monthly airline passengers from 1949–1960">
![Monthly airline passengers from 1949–1960](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/getting-started/21_polars_quickstart_files/figure-markdown_strict/cell-13-output-1.png)
</Frame>
</Accordion>
<Accordion title="2. Data Requirements">
<Info>
- The target variable column should not contain missing or non-numeric values.
- Ensure there are no gaps in the timestamps.
- The time column must be of type [Date](https://docs.pola.rs/api/python/stable/reference/api/polars.datatypes.Date.html) or [Datetime](https://docs.pola.rs/api/python/stable/reference/api/polars.datatypes.Datetime.html).
For comprehensive details, visit [Data Requirements](https://docs.nixtla.io/docs/getting-started-data_requirements).
</Info>
</Accordion>
<Accordion title="3. Generate a 12-Month Forecast">
```python forecast-timegpt-12-months
timegpt_fcst_df = nixtla_client.forecast(
df=df,
h=12,
freq='1mo',
time_col='timestamp',
target_col='value'
)
timegpt_fcst_df.head()
```
<Info>
Forecast values for the next 12 months:
</Info>
| timestamp | TimeGPT |
| ------------ | ------------ |
| 1961-01-01 | 437.837921 |
| 1961-02-01 | 426.062714 |
| 1961-03-01 | 463.116547 |
| 1961-04-01 | 478.244507 |
| 1961-05-01 | 505.646484 |
<Info>
Plot the 12-month forecast alongside the actual data:
</Info>
```python plot-timegpt-12-months
nixtla_client.plot(df, timegpt_fcst_df, time_col='timestamp', target_col='value')
```
<Frame caption="Comparison of forecast and actual data (12 months)">
![Comparison of forecast and actual data (12 months)](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/getting-started/21_polars_quickstart_files/figure-markdown_strict/cell-15-output-1.png)
</Frame>
</Accordion>
<Accordion title="4. Forecast Longer Horizons (36 Months)">
<Warning>
When requesting `h` (horizon) values larger than the model's maximum, you may see a warning.
</Warning>
```python forecast-timegpt-36-months
timegpt_fcst_df = nixtla_client.forecast(
df=df,
h=36,
time_col='timestamp',
target_col='value',
freq='1mo',
model='timegpt-1-long-horizon'
)
timegpt_fcst_df.head()
```
<Info>
Plot the 36-month forecast results:
</Info>
```python plot-timegpt-36-months
nixtla_client.plot(df, timegpt_fcst_df, time_col='timestamp', target_col='value')
```
<Frame caption="36-month forecast">
![36-month forecast](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/getting-started/21_polars_quickstart_files/figure-markdown_strict/cell-17-output-1.png)
</Frame>
</Accordion>
<Accordion title="5. Generate a Shorter Forecast (6 Months)">
```python forecast-timegpt-6-months
timegpt_fcst_df = nixtla_client.forecast(
df=df,
h=6,
time_col='timestamp',
target_col='value',
freq='1mo'
)
nixtla_client.plot(df, timegpt_fcst_df, time_col='timestamp', target_col='value')
```
<Frame caption="6-month forecast">
![6-month forecast](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/getting-started/21_polars_quickstart_files/figure-markdown_strict/cell-18-output-2.png)
</Frame>
</Accordion>
</AccordionGroup>
</Step>
</Steps>
<CardGroup cols={1}>
<Card title="Key Takeaways">
- TimeGPT can forecast short to long horizons easily.
- Minimal setup is required—just an API key and your dataset!
- Data validation helps ensure accurate forecasts.
</Card>
</CardGroup>
<Check>
You are now ready to harness TimeGPT for quick and reliable time series forecasting using Polars!
</Check>
---
title: Introduction
description: "Welcome to the home of your new documentation"
---
## Setting up
The first step to world-class documentation is setting up your editing environments.
<CardGroup cols={2}>
<Card
title="Edit Your Docs"
icon="pen-to-square"
href="https://mintlify.com/docs/quickstart"
>
Get your docs set up locally for easy development
</Card>
<Card
title="Preview Changes"
icon="image"
href="https://mintlify.com/docs/development"
>
Preview your changes before you push to make sure they're perfect
</Card>
</CardGroup>
## Make it yours
Update your docs to your brand and add valuable content for the best user conversion.
<CardGroup cols={2}>
<Card
title="Customize Style"
icon="palette"
href="https://mintlify.com/docs/settings/global"
>
Customize your docs to your company's colors and brands
</Card>
<Card
title="Reference APIs"
icon="code"
href="https://mintlify.com/docs/api-playground/openapi"
>
Automatically generate endpoints from an OpenAPI spec
</Card>
<Card
title="Add Components"
icon="screwdriver-wrench"
href="https://mintlify.com/docs/content/components/accordions"
>
Build interactive features and designs to guide your users
</Card>
<Card
title="Get Inspiration"
icon="stars"
href="https://mintlify.com/customers"
>
Check out our showcase of our favorite documentation
</Card>
</CardGroup>
---
title: 'TimeGPT Quickstart'
description: 'Start building awesome documentation in under 5 minutes'
---
## Setup your development
Learn how to update your docs locally and deploy them to the public.
### Edit and preview
<AccordionGroup>
<Accordion icon="github" title="Clone your docs locally">
During the onboarding process, we created a repository on your Github with
your docs content. You can find this repository on our
[dashboard](https://dashboard.mintlify.com). To clone the repository
locally, follow these
[instructions](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository)
in your terminal.
</Accordion>
<Accordion icon="rectangle-terminal" title="Preview changes">
Previewing helps you make sure your changes look as intended. We built a
command line interface to render these changes locally.
1. Install the
[Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the
documentation changes locally with this command: `npm i -g mintlify`
2. Run the following command at the root of your documentation (where
`docs.json` is): `mintlify dev`
<Note>
If you’re currently using the legacy `mint.json` configuration file, please update the Mintlify CLI with `npm i -g mintlify@latest`,
then run the upgrade command in your docs repository: `mintlify upgrade`.
You should now be using the new `docs.json` configuration file. Feel free to delete the `mint.json` file from your repository.
</Note>
</Note>
</Accordion>
</AccordionGroup>
### Deploy your changes
<AccordionGroup>
<Accordion icon="message-bot" title="Install our Github app">
Our Github app automatically deploys your changes to your docs site, so you
don't need to manage deployments yourself. You can find the link to install on
your [dashboard](https://dashboard.mintlify.com). Once the bot has been
successfully installed, there should be a check mark next to the commit hash
of the repo.
</Accordion>
<Accordion icon="rocket" title="Push your changes">
[Commit and push your changes to
Git](https://docs.github.com/en/get-started/using-git/pushing-commits-to-a-remote-repository#about-git-push)
for your changes to update in your docs site. If you push and don't see that
the Github app successfully deployed your changes, you can also manually
update your docs through our [dashboard](https://dashboard.mintlify.com).
</Accordion>
</AccordionGroup>
## Update your docs
Add content directly in your files with MDX syntax and React components. You can use any of our components, or even build your own.
---
title: "Add Confidence Levels"
description: "Learn how to configure confidence levels to control anomaly detection sensitivity."
icon: "percent"
---
## Confidence Level in Anomaly Detection
The confidence level is used to determine the threshold for anomaly detection. By default, any values that lie outside the 99% confidence interval are labeled as anomalies.
Use the `level` parameter (0-100) to control how many anomalies are detected.
- Increasing the `level` (closer to 100) decreases the number of anomalies detected.
- Decreasing the `level` (closer to 0) increases the number of anomalies detected.
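The intuition can be illustrated outside TimeGPT with a synthetic normal sample: a lower confidence level implies a tighter threshold, so more points fall outside it and get flagged. This is a simplified sketch, not Nixtla's detection logic:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
x = rng.normal(0, 1, size=10_000)

def n_anomalies(level):
    # Two-sided z threshold for a `level`% confidence interval
    z = NormalDist().inv_cdf(0.5 + level / 200)
    return int((np.abs(x) > z).sum())

print(n_anomalies(70), n_anomalies(99))  # the lower level flags more points
```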
## How to Add Confidence Levels
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/capabilities/historical-anomaly-detection/04_confidence_levels.ipynb)
### Step 1: Set Up Data and Client
Follow the steps in the historical anomaly detection tutorial to set up the data and client.
### Step 2: Detect Anomalies with a Confidence Level
To detect anomalies with a confidence level, use the `level` parameter. In this example, we'll use a 70% confidence level.
```python
# Anomaly detection using a 70% confidence interval
anomalies_df = nixtla_client.detect_anomalies(
df,
freq='D',
level=70
)
```
### Step 3: Visualize Results
To visualize the results, use the `plot` method:
```python
nixtla_client.plot(df, anomalies_df)
```
<Frame caption="Anomalies detected with a 70% confidence interval">
![Anomalies detected with a 70% confidence interval](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/capabilities/historical-anomaly-detection/04_confidence_levels_files/figure-markdown_strict/cell-10-output-2.png)
</Frame>
This plot highlights points that fall outside the 70% confidence interval, indicating they are considered anomalies. Points within the interval are considered normal behavior.
---
title: "Add Date Features"
description: "Learn how to enrich datasets with date features for historical anomaly detection."
icon: "calendar"
---
## Why Add Date Features?
Date features help the model recognize seasonal patterns, holiday effects, or recurring fluctuations. Examples include `day_of_week`, `month`, `year`, and more.
Adding date features is a powerful way to enrich your dataset when no exogenous variables are available. These features help guide the historical anomaly detection model in recognizing seasonal and temporal patterns.
## How to Add Date Features
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/tutorials/20_anomaly_detection.ipynb)
### Step 1: Set Up Data and Client
Follow the steps in the historical anomaly detection tutorial to set up the data and client.
### Step 2: Add Date Features for Anomaly Detection
To add date features, use the `date_features` parameter. You can enable all possible features by setting `date_features=True`, or specify certain features to focus on:
```python
# Add date features at the month and year levels
anomalies_df_x = nixtla_client.detect_anomalies(
df,
freq='D',
date_features=['month', 'year'],
date_features_to_one_hot=True,
level=99.99,
)
```
This code extracts monthly and yearly patterns, then converts them to one-hot-encoded features, creating multiple exogenous variables for the anomaly detection model.
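To see roughly what this one-hot expansion produces, here is a small pandas sketch (an illustration only, not Nixtla's internal preprocessing):

```python
import pandas as pd

# Four monthly timestamps; extract month and year, then one-hot encode
dates = pd.date_range("2007-01-01", periods=4, freq="MS")
feat = pd.DataFrame({"month": dates.month, "year": dates.year})
one_hot = pd.get_dummies(feat, columns=["month", "year"])
print(one_hot.columns.tolist())  # e.g. month_1 ... month_4, year_2007
```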
### Step 3: Review Output Logs
When you run the detection, logs inform you about which exogenous features were used:
```bash
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Querying model metadata...
INFO:nixtla.nixtla_client:Using the following exogenous features: [
'month_1.0', 'month_2.0', ...
'year_2007.0', 'year_2008.0', ...
]
INFO:nixtla.nixtla_client:Calling Anomaly Detector Endpoint...
```
This output shows which date features were used in the anomaly detection process.
### Step 4: Visualize Anomalies
To visualize the anomalies, use the `plot` method:
```python
nixtla_client.plot(df, anomalies_df_x)
```
<Frame caption="Anomaly plot showing flagged points based on date features.">
![Date features anomalies
plot](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/capabilities/historical-anomaly-detection/03_anomaly_detection_date_features_files/figure-markdown_strict/cell-10-output-2.png)
</Frame>
To see the weight contributions of the date features, use the `weights_x` attribute:
```python
nixtla_client.weights_x.plot.barh(
x='features',
y='weights'
)
```
<Frame caption="Bar chart indicating which date features contribute most to anomaly detection.">
![Weights
plot](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/capabilities/historical-anomaly-detection/03_anomaly_detection_date_features_files/figure-markdown_strict/cell-11-output-1.png)
</Frame>
---
title: "Special topics"
description: "Explore special topics in TimeGPT including irregular timestamps, bounded forecasts, hierarchical forecasts, missing values, and improving forecast accuracy."
icon: "gear"
---
# Special Topics in TimeGPT
<Info>
**TimeGPT** is a robust foundation model for time series forecasting. It provides advanced capabilities, including hierarchical and bounded forecasts. Certain special situations require specific considerations, such as handling irregular timestamps or datasets containing missing values, to leverage the full potential of **TimeGPT**.
</Info>
In this section, we cover these special topics to help you get the most out of TimeGPT:
## Overview of Special Topics
<CardGroup cols={3}>
<Card
title="Irregular Timestamps"
href="https://docs.nixtla.io/docs/capabilities-forecast-irregular_timestamps"
>
Learn how to manage irregular timestamps effectively to ensure correct utilization of TimeGPT.
</Card>
<Card
title="Bounded Forecasts"
href="https://docs.nixtla.io/docs/tutorials-bounded_forecasts"
>
Explore how to generate forecasts within defined limits using TimeGPT, ideal for bounded-outcome scenarios.
</Card>
<Card
title="Hierarchical Forecasts"
href="https://docs.nixtla.io/docs/tutorials-hierarchical_forecasting"
>
Understand how to perform coherent forecasts at multiple aggregation levels using TimeGPT.
</Card>
<Card
title="Missing Values"
href="https://docs.nixtla.io/docs/tutorials-missing_values"
>
Learn effective strategies for handling missing data points in time series when using TimeGPT.
</Card>
<Card
title="Improve Forecast Accuracy"
href="https://docs.nixtla.io/docs/tutorials-improve_forecast_accuracy_with_timegpt"
>
Discover techniques to enhance forecasting accuracy when working with TimeGPT.
</Card>
</CardGroup>
## Getting Started with Special Topics
Sometimes, the best way to integrate special features in TimeGPT is by following a series of clear, sequential steps. Below is a simplified workflow to guide you:
<Steps>
<Step title="Step 1: Identify the Special Topic">
Determine the challenge you are addressing (e.g., irregular timestamps, bounded forecasts, hierarchical forecasts, handling missing values, or improving accuracy).
</Step>
<Step title="Step 2: Prepare Your Data">
Align your time series data with the requirements of the specific topic.
<Info>
For instance, if timestamps are irregular, you might need to resample or align data before passing it to TimeGPT.
</Info>
</Step>
<Step title="Step 3: Configure TimeGPT">
Modify your forecasts to accommodate the special topic. For example, bounded forecasts with TimeGPT are typically produced by transforming the target variable, forecasting, and then inverting the transform.
<CodeGroup>
```python Configuring TimeGPT for Bounded Forecasting
# Sketch: keep forecasts non-negative via a log transform,
# following the approach in the bounded forecasts tutorial
import numpy as np
from nixtla import NixtlaClient

nixtla_client = NixtlaClient(api_key='my_api_key_provided_by_nixtla')

# df is assumed to have columns: unique_id, ds, y (with y >= 0)
df['y'] = np.log(df['y'] + 1)  # move the target to an unbounded scale
fcst_df = nixtla_client.forecast(df=df, h=12)
fcst_df['TimeGPT'] = np.exp(fcst_df['TimeGPT']) - 1  # invert: forecasts stay >= 0
print(fcst_df.head())
```
</CodeGroup>
</Step>
<Step title="Step 4: Monitor and Evaluate Forecasts">
Use appropriate evaluation metrics to ensure the forecasts meet your accuracy requirements. Adjust parameters or data preprocessing steps as needed.
</Step>
<Step title="Step 5: Iterate and Improve">
Incorporate feedback from real-world usage to refine your approach. Revisit the documentation for each specific topic and apply best practices.
</Step>
</Steps>
<AccordionGroup>
<Accordion title="Need More Guidance?">
Refer to the linked tutorials in the **Overview of Special Topics** section for deeper insights on each specialized area.
</Accordion>
</AccordionGroup>
<Check>
With a careful approach to preparing data and configuring **TimeGPT** for these special scenarios, you can unlock superior forecasting performance for a wide range of real-world applications.
</Check>
---
title: "Multiple Series Forecasting Tutorial"
description: "Learn how to generate forecasts for multiple time series simultaneously."
icon: "layer-group"
---
# Multiple Series Forecasting
<Info>
TimeGPT provides straightforward multi-series forecasting. This approach enables you to forecast several time series concurrently rather than focusing on just one.
</Info>
<Check>
- **Forecasts are univariate**: TimeGPT does not directly account for interactions between the target variables of different series.
- **Exogenous features**: You can still include additional explanatory (exogenous) variables like categories, numeric columns, holidays, or special events to enrich the model.
</Check>
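A minimal multi-series frame simply stacks the series and distinguishes them by `unique_id`. The values below are illustrative only:

```python
import pandas as pd

# Two hourly series ("BE" and "DE") stacked in one long-format frame
df = pd.DataFrame({
    "unique_id": ["BE", "BE", "DE", "DE"],
    "ds": pd.to_datetime([
        "2016-12-01 00:00", "2016-12-01 01:00",
        "2016-12-01 00:00", "2016-12-01 01:00",
    ]),
    "y": [72.00, 65.80, 25.10, 24.30],
})
print(df["unique_id"].nunique())  # 2
```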
Given these capabilities, TimeGPT can be fine-tuned to your own datasets for precise and efficient forecasting. Below, let's see how to use multiple series forecasting in practice:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/tutorials/05_multiple_series.ipynb)
<CardGroup cols={2}>
<Card title="Key Concept" icon="lightbulb">
Global models like TimeGPT can handle multiple series in a single training session and produce a separate forecast for each.
</Card>
<Card title="Benefit" icon="check">
Multi-series learning improves efficiency, leveraging shared patterns across series that often lead to better forecasts.
</Card>
</CardGroup>
<Steps>
<Step title="1. Install and import packages">
Install and import the required libraries, then initialize the Nixtla client.
```python Nixtla Client Initialization
import pandas as pd
from nixtla import NixtlaClient
nixtla_client = NixtlaClient(
    api_key='my_api_key_provided_by_nixtla'
)
```
<AccordionGroup>
<Accordion title="Using an Azure AI Endpoint">
To use Azure AI endpoints, specify the `base_url` parameter:
```python Azure AI Endpoint Setup
nixtla_client = NixtlaClient(
    base_url="your azure ai endpoint",
    api_key="your api_key"
)
```
</Accordion>
</AccordionGroup>
</Step>
<Step title="2. Load the data">
You can now load the electricity prices dataset from various European markets. TimeGPT automatically treats it as multiple series based on the `unique_id` column.
```python Load Electricity Dataset
df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short.csv')
df.head()
```
<AccordionGroup>
<Accordion title="Dataset Preview">
| | unique_id | ds | y |
| ----- | ----------- | --------------------- | ------- |
| 0 | BE | 2016-12-01 00:00:00 | 72.00 |
| 1 | BE | 2016-12-01 01:00:00 | 65.80 |
| 2 | BE | 2016-12-01 02:00:00 | 59.99 |
| 3 | BE | 2016-12-01 03:00:00 | 50.69 |
| 4 | BE | 2016-12-01 04:00:00 | 52.58 |
</Accordion>
</AccordionGroup>
Now, let's visualize the data using the `NixtlaClient.plot()` method.
```python Plot Electricity Series
nixtla_client.plot(df)
```
<Frame caption="Electricity Markets Series Plot">
![Electricity Markets Series Plot](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/tutorials/05_multiple_series_files/figure-markdown_strict/cell-11-output-1.png)
</Frame>
</Step>
<Step title="3. Forecast multiple series">
Pass the DataFrame to the `forecast()` method. TimeGPT automatically handles each unique series based on `unique_id`.
```python Forecast Multiple Series
timegpt_fcst_multiseries_df = nixtla_client.forecast(
    df=df,
    h=24,
    level=[80, 90]
)
timegpt_fcst_multiseries_df.head()
```
<AccordionGroup>
<Accordion title="Model Execution Logs">
```bash Forecast Model Logs
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: H
INFO:nixtla.nixtla_client:Restricting input...
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
```
</Accordion>
<Accordion title="Forecast Preview">
| | unique_id | ds | TimeGPT | TimeGPT-lo-90 | TimeGPT-lo-80 | TimeGPT-hi-80 | TimeGPT-hi-90 |
| ----- | ----------- | --------------------- | ----------- | --------------- | --------------- | --------------- | --------------- |
| 0 | BE | 2016-12-31 00:00:00 | 46.151176 | 36.660478 | 38.337019 | 53.965334 | 55.641875 |
| 1 | BE | 2016-12-31 01:00:00 | 42.426598 | 31.602231 | 33.976724 | 50.876471 | 53.250964 |
| 2 | BE | 2016-12-31 02:00:00 | 40.242889 | 30.439970 | 33.634985 | 46.850794 | 50.045809 |
| 3 | BE | 2016-12-31 03:00:00 | 38.265339 | 26.841481 | 31.022093 | 45.508585 | 49.689197 |
| 4 | BE | 2016-12-31 04:00:00 | 36.618801 | 18.541384 | 27.981346 | 45.256256 | 54.696218 |
</Accordion>
</AccordionGroup>
<Info>
When using Azure endpoints, specify `model="azureai"`. By default, the `timegpt-1` model is used. See the
[details here](https://docs.nixtla.io/docs/tutorials-long_horizon_forecasting) for available models.
</Info>
Visualize the forecasts:
```python Plot Forecasts
nixtla_client.plot(
    df,
    timegpt_fcst_multiseries_df,
    max_insample_length=365,
    level=[80, 90]
)
```
<Frame caption="Multiple Series Forecast Plot">
![Forecast Plot](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/tutorials/05_multiple_series_files/figure-markdown_strict/cell-13-output-1.png)
</Frame>
</Step>
<Step title="4. Generate historical forecasts">
You can also produce historical forecasts (including prediction intervals) by setting `add_history=True`. This allows you to compare previously observed values with model predictions.
```python Historical Forecasts with Prediction Intervals
historical_fcst_df = nixtla_client.forecast(
    df=df,
    h=24,
    level=[80, 90],
    add_history=True
)
historical_fcst_df.head()
```
</Step>
</Steps>
<Check>
Congratulations! You have successfully performed multi-series forecasting with TimeGPT, taking advantage of its global modeling approach.
</Check>
---
title: "Training"
description: "Tutorials and steps for training TimeGPT for various forecasting scenarios"
icon: "gear"
---
## Overview
This section provides tutorials about training **TimeGPT** under specific conditions. Learn how to extend predictions across multiple time series and over long horizons with ease.
<Info>
TimeGPT is designed to handle time series forecasting tasks of varying complexities. The tutorials below will guide you through key strategies for effective training and deployment.
</Info>
---
## Quick Start: General Training Steps
Below is a concise overview of how to start training with **TimeGPT**.
<Steps>
<Step title="Prepare Your Data">
Ensure your time series data is clean, properly formatted, and includes all necessary features (e.g., timestamps, values, external variables).
</Step>
<Step title="Select Your Forecasting Approach">
Decide whether you need a single series or multi-series approach, and whether you need short or long horizons.
</Step>
<Step title="Configure TimeGPT">
Set up the relevant hyperparameters for your forecasting needs (window size, horizon, seasonalities, etc.).
</Step>
<Step title="Train and Evaluate">
Train the model on your dataset and evaluate performance with appropriate error metrics (MAPE, RMSE, etc.).
</Step>
<Step title="Refine and Deploy">
Use performance insights to refine your model, then deploy it in a production environment.
</Step>
</Steps>
---
## Tutorials
<AccordionGroup>
<Accordion title="Long Horizon Forecasting">
Learn how to make predictions beyond two seasonal periods or further into the future using the specialized long-horizon forecasting model of **TimeGPT**.
<br />
<br />
<Info>
For forecasting horizons that exceed two seasonal periods, you may need additional computational resources and careful hyperparameter tuning.
</Info>
<br />
<CardGroup cols={2}>
<Card title="Long Horizon Forecasting Guide" href="https://docs.nixtla.io/docs/tutorials-long_horizon_forecasting" cta="View Tutorial">
Discover the steps to train and optimize TimeGPT's long-horizon capabilities.
</Card>
</CardGroup>
</Accordion>
<Accordion title="Multiple Series Forecasting">
Learn how to forecast multiple time series simultaneously using **TimeGPT**.
<br />
<br />
<Info>
Forecasting multiple time series can help you leverage shared patterns and reduce overall computational overhead.
</Info>
<br />
<CardGroup cols={2}>
<Card title="Multiple Series Forecasting Guide" href="https://docs.nixtla.io/docs/tutorials-multiple_series_forecasting" cta="View Tutorial">
Forecast numerous time series at once, streamlining your workflow for complex projects.
</Card>
</CardGroup>
</Accordion>
</AccordionGroup>
---
## Example Training Code
TimeGPT is a pretrained foundation model, so you do not train it from scratch. Instead, you adapt it to your own data through fine-tuning, controlled by the `finetune_steps` parameter of the `forecast` method. Below is a simplified sketch; adjust the number of fine-tuning steps for your specific use case.
<CodeGroup>
```python Example: Fine-Tuning TimeGPT Multi-Series
# Example: fine-tuning TimeGPT for a multi-series scenario
import pandas as pd
from nixtla import NixtlaClient

nixtla_client = NixtlaClient(
    api_key='my_api_key_provided_by_nixtla'
)

# Load a multi-series dataset (long format: unique_id, ds, y)
df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short.csv')

# Forecast with a few fine-tuning iterations on your data
fcst_df = nixtla_client.forecast(
    df=df,
    h=24,               # forecast horizon
    finetune_steps=10,  # number of fine-tuning iterations
    level=[80, 90],     # prediction intervals
)
print(fcst_df.head())
```
</CodeGroup>
---
<Check>
Congratulations! You now have an overview of how to set up and train TimeGPT for both single and multiple series forecasting as well as for long-horizon use cases.
</Check>
---
title: "Prediction Intervals"
description: "Learn how to create prediction intervals with TimeGPT"
icon: "chart-area"
---
## What Are Prediction Intervals?
A prediction interval provides a range where a future observation of a time series is expected to fall, with a specific level of probability.
For example, a 95% prediction interval means that the true future value is expected to lie within this range 95 times out of 100.
Wider intervals reflect greater uncertainty, while narrower intervals indicate higher confidence in the forecast.
With TimeGPT, you can easily generate prediction intervals for any confidence level between 0% and 100%.
These intervals are constructed using **[conformal prediction](https://en.wikipedia.org/wiki/Conformal_prediction)**, a distribution-free framework for uncertainty quantification.
Prediction intervals differ from confidence intervals:
- **Prediction Intervals**: Capture the uncertainty in future observations.
- **Confidence Intervals**: Quantify the uncertainty in the estimated model parameters (e.g., the mean).
As a result, prediction intervals are typically wider, as they account for both model and data variability.
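To build intuition for how a distribution-free interval can be formed, here is a simplified split-conformal sketch on synthetic residuals. This illustrates the general idea only, not TimeGPT's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
# Absolute residuals of some point forecaster on a calibration set (synthetic)
residuals = np.abs(rng.normal(0, 10, size=500))

# Split-conformal 95% half-width: the 95th percentile of |residual|
q = np.quantile(residuals, 0.95)

point_forecast = 437.84  # e.g. a point forecast for the next step
lo, hi = point_forecast - q, point_forecast + q
print(f"95% interval: [{lo:.2f}, {hi:.2f}]")
```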
## How to Generate Prediction Intervals
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/capabilities/forecast/10_prediction_intervals.ipynb)
### Step 1: Import Packages
Import the required packages and initialize the Nixtla client.
```python
import pandas as pd
from nixtla import NixtlaClient
nixtla_client = NixtlaClient(
api_key='my_api_key_provided_by_nixtla' # defaults to os.environ.get("NIXTLA_API_KEY")
)
```
### Step 2: Load Data
In this tutorial, we will use the Air Passengers dataset.
```python
df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv')
df.head()
```
| | timestamp | value |
| ----- | ------------ | ------- |
| 0 | 1949-01-01 | 112 |
| 1 | 1949-02-01 | 118 |
| 2 | 1949-03-01 | 132 |
| 3 | 1949-04-01 | 129 |
| 4 | 1949-05-01 | 121 |
### Step 3: Forecast with Prediction Intervals
To generate prediction intervals with TimeGPT, provide a list of desired confidence levels using the `level` argument.
Note that accepted values are between 0 and 100.
- Higher confidence levels provide more certainty that the true value will be captured, but result in wider, less precise intervals.
- Lower confidence levels provide less certainty that the true value will be captured, but result in narrower, more precise intervals.
```python
timegpt_fcst_pred_int_df = nixtla_client.forecast(
df=df,
h=12,
level=[80, 90, 99],
time_col='timestamp',
target_col='value',
)
timegpt_fcst_pred_int_df.head()
```
| timestamp | TimeGPT | TimeGPT-hi-80 | TimeGPT-hi-90 | TimeGPT-hi-99 | TimeGPT-lo-80 | TimeGPT-lo-90 | TimeGPT-lo-99 |
|-------------|---------|----------------|----------------|----------------|----------------|----------------|----------------|
| 1961-01-01 | 437.84 | 443.69 | 451.89 | 459.28 | 431.99 | 423.78 | 416.40 |
| 1961-02-01 | 426.06 | 439.42 | 444.43 | 448.94 | 412.70 | 407.70 | 403.19 |
| 1961-03-01 | 463.12 | 488.83 | 495.92 | 502.31 | 437.41 | 430.31 | 423.93 |
| 1961-04-01 | 478.24 | 507.77 | 509.72 | 511.47 | 448.72 | 446.77 | 445.02 |
| 1961-05-01 | 505.65 | 532.89 | 539.32 | 545.12 | 478.41 | 471.97 | 466.18 |
You can visualize the prediction intervals using the `plot` method. To do so, specify the confidence levels to display using the `level` argument.
```python
nixtla_client.plot(
df,
timegpt_fcst_pred_int_df,
time_col='timestamp',
target_col='value',
level=[80, 90, 99]
)
```
<img src="/images/docs/tutorials-uncertainty/prediction_intervals_fc.png"/>
### Step 4: Historical Forecast
You can also generate prediction intervals for historical forecasts by setting `add_history=True`.
```python
timegpt_fcst_pred_int_historical_df = nixtla_client.forecast(
df=df,
h=12,
level=[80, 90],
time_col='timestamp',
target_col='value',
add_history=True,
)
timegpt_fcst_pred_int_historical_df.head()
```
Plot the prediction intervals for the historical forecasts.
```python
nixtla_client.plot(
df,
timegpt_fcst_pred_int_historical_df,
time_col='timestamp',
target_col='value',
level=[80, 90]
)
```
<img src="/images/docs/tutorials-uncertainty/prediction_intervals_historical.png"/>
### Step 5: Cross-Validation
You can use the `cross_validation` method to generate prediction intervals for each time window.
```python
cv_df = nixtla_client.cross_validation(
df=df,
h=12,
n_windows=4,
level=[80, 90, 99],
time_col='timestamp',
target_col='value'
)
cv_df.head()
```
After computing the forecasts, you can visualize the results for each cross-validation cutoff to better understand model performance over time.
```python
cutoffs = cv_df['cutoff'].unique()
for cutoff in cutoffs:
fig = nixtla_client.plot(
df.tail(100),
cv_df.query('cutoff == @cutoff').drop(columns=['cutoff', 'value']),
level=[80, 90, 99],
time_col='timestamp',
target_col='value',
)
display(fig)
```
<img src="/images/docs/tutorials-uncertainty/prediction_intervals_cv1.png"/>
<img src="/images/docs/tutorials-uncertainty/prediction_intervals_cv2.png"/>
<Check>
Congratulations! You have successfully generated prediction intervals using TimeGPT.
You also visualized historical forecasts with intervals and evaluated their coverage across multiple time windows using cross-validation.
</Check>
---
title: "Quantile Forecasts"
description: "Learn how to generate quantile forecasts with TimeGPT"
icon: "ruler-vertical"
---
## What Are Quantile Forecasts?
Quantile forecasts correspond to specific percentiles of the forecast distribution and provide a more complete representation of the range of possible outcomes.
- The 0.5 quantile (or 50th percentile) is the median forecast, meaning there is a 50% chance that the actual value will fall below or above this point.
- The 0.1 quantile (or 10th percentile) forecast represents a value that the actual observation is expected to fall below 10% of the time.
- The 0.9 quantile (or 90th percentile) forecast represents a value that the actual observation is expected to fall below 90% of the time.
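To build intuition for these properties, you can check them on a simulated sample with NumPy; this is a standalone illustration, not part of the TimeGPT workflow.

```python
import numpy as np

# Simulate 1,000 observations from a known distribution
rng = np.random.default_rng(42)
sample = rng.normal(loc=100, scale=10, size=1000)

# Empirical 0.1, 0.5, and 0.9 quantiles of the sample
q10, q50, q90 = np.quantile(sample, [0.1, 0.5, 0.9])

# By construction, ~10% of the sample falls below q10 and ~90% below q90
share_below_q10 = np.mean(sample < q10)
share_below_q90 = np.mean(sample < q90)
```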
TimeGPT supports quantile forecasts. In this tutorial, we will show you how to generate them.
## Why Use Quantile Forecasts?
- Quantile forecasts can provide information about best and worst-case scenarios, allowing you to make better decisions under uncertainty.
- In many real-world scenarios, being wrong in one direction is more costly than being wrong in the other. Quantile forecasts allow you to focus on the specific percentiles that matter most for your particular use case.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/tutorials/10_uncertainty_quantification_with_quantile_forecasts.ipynb)
## How to Generate Quantile Forecasts
### Step 1: Import Packages
Import the required packages and initialize a Nixtla client to connect with TimeGPT.
```python
import pandas as pd
from nixtla import NixtlaClient
from IPython.display import display
nixtla_client = NixtlaClient(
api_key='my_api_key_provided_by_nixtla' # Defaults to os.environ.get("NIXTLA_API_KEY")
)
```
### Step 2: Load Data
In this tutorial, we will use the Air Passengers dataset.
```python
df = pd.read_csv(
'https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv'
)
df.head()
```
| | timestamp | value |
| ----- | ------------ | ------- |
| 0 | 1949-01-01 | 112 |
| 1 | 1949-02-01 | 118 |
| 2 | 1949-03-01 | 132 |
| 3 | 1949-04-01 | 129 |
| 4 | 1949-05-01 | 121 |
### Step 3: Forecast with Quantiles
To specify the desired quantiles, you need to pass a list of quantiles to the `quantiles` parameter. Choose quantiles between 0 and 1 based on your uncertainty analysis needs.
```python
quantiles = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
timegpt_quantile_fcst_df = nixtla_client.forecast(
df=df,
h=12,
quantiles=quantiles,
time_col='timestamp',
target_col='value'
)
timegpt_quantile_fcst_df.head()
```
| timestamp | TimeGPT | TimeGPT-q-10 | TimeGPT-q-20 | TimeGPT-q-30 | TimeGPT-q-40 | TimeGPT-q-50 | TimeGPT-q-60 | TimeGPT-q-70 | TimeGPT-q-80 | TimeGPT-q-90 |
|-------------|---------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| 1961-01-01 | 437.84 | 431.99 | 435.04 | 435.38 | 436.40 | 437.84 | 439.27 | 440.29 | 440.63 | 443.69 |
| 1961-02-01 | 426.06 | 412.70 | 414.83 | 416.04 | 421.72 | 426.06 | 430.41 | 436.08 | 437.29 | 439.42 |
| 1961-03-01 | 463.12 | 437.41 | 444.23 | 446.42 | 450.71 | 463.12 | 475.53 | 479.81 | 482.00 | 488.82 |
| 1961-04-01 | 478.24 | 448.72 | 455.43 | 465.57 | 469.88 | 478.24 | 486.61 | 490.92 | 501.06 | 507.76 |
| 1961-05-01 | 505.65 | 478.41 | 493.16 | 497.99 | 499.14 | 505.65 | 512.15 | 513.30 | 518.14 | 532.89 |
TimeGPT returns multiple columns in the forecast output:
- Each requested quantile gets its own column named in the format `TimeGPT-q-...`
- The `TimeGPT` column contains the point forecast, which is identical to the 0.5 quantile (`TimeGPT-q-50`)
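If you need to work with the quantile columns programmatically, you can select them by their prefix. Here is a small sketch on a toy DataFrame that mimics the output's naming convention (the values are illustrative, not real forecasts):

```python
import pandas as pd

# Toy DataFrame mimicking the forecast output's column names
fcst = pd.DataFrame({
    'timestamp': pd.date_range('1961-01-01', periods=3, freq='MS'),
    'TimeGPT': [437.84, 426.06, 463.12],
    'TimeGPT-q-10': [431.99, 412.70, 437.41],
    'TimeGPT-q-50': [437.84, 426.06, 463.12],
    'TimeGPT-q-90': [443.69, 439.42, 488.82],
})

# Select every quantile column by its prefix
q_cols = [c for c in fcst.columns if c.startswith('TimeGPT-q-')]

# The point forecast matches the 0.5 quantile
point_is_median = fcst['TimeGPT'].equals(fcst['TimeGPT-q-50'])
```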
### Step 4: Plot the Quantile Forecasts
To plot the quantile forecasts, you can use the `plot` method.
```python
nixtla_client.plot(
df,
timegpt_quantile_fcst_df,
time_col='timestamp',
target_col='value'
)
```
<img src="/images/docs/tutorials-uncertainty/quantiles_fc.png"/>
The plot displays:
- The actual time series data in blue.
- Multiple forecast intervals represented by different quantiles:
- The 0.5 quantile (50th percentile) represents the median forecast.
- The 0.1 and 0.9 quantiles (10th and 90th percentiles) show the outer bounds of the forecast.
- Additional quantiles (0.2, 0.3, 0.4, 0.6, 0.7, 0.8) are shown in between, creating a gradient of uncertainty.
This type of visualization is particularly useful because it:
- Shows the full distribution of possible outcomes rather than just a single point forecast.
- Helps identify best and worst-case scenarios.
- Allows decision-makers to understand the range of uncertainty in the predictions.
### Step 5: Historical Forecast
You can also generate quantile forecasts for historical data by setting the `add_history` parameter to `True`.
```python
timegpt_quantile_fcst_df = nixtla_client.forecast(
df=df,
h=12,
quantiles=quantiles,
time_col='timestamp',
target_col='value',
add_history=True, # Add historical data to the forecast
)
nixtla_client.plot(
df,
timegpt_quantile_fcst_df,
time_col='timestamp',
target_col='value'
)
```
<img src="/images/docs/tutorials-uncertainty/quantiles_historical.png"/>
The plot now includes quantile forecasts for the historical data. This allows you to evaluate how well the quantile forecasts capture the true variability and identify any systematic bias.
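One way to make that evaluation concrete is to compute the empirical share of actuals falling below each quantile forecast. The sketch below uses toy data; in practice you would merge `df` with the historical rows of `timegpt_quantile_fcst_df` on `timestamp`.

```python
import numpy as np
import pandas as pd

# Toy actuals joined with hypothetical historical quantile forecasts
rng = np.random.default_rng(0)
n = 2000
hist = pd.DataFrame({
    'value': rng.normal(100, 10, size=n),
    'TimeGPT-q-10': 100 - 12.82,  # ~10th percentile of N(100, 10)
    'TimeGPT-q-90': 100 + 12.82,  # ~90th percentile of N(100, 10)
})

# A well-calibrated q-10 forecast is exceeded by the actuals ~90% of the time
share_below_q10 = (hist['value'] < hist['TimeGPT-q-10']).mean()
share_below_q90 = (hist['value'] < hist['TimeGPT-q-90']).mean()
```

Large deviations between these empirical shares and the nominal quantile levels indicate a systematic bias.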
### Step 6: Cross-Validation
To evaluate the performance of the quantile forecasts across multiple time windows, you can use the `cross_validation` method.
```python
cv_df = nixtla_client.cross_validation(
df=df,
h=12,
n_windows=4,
quantiles=quantiles,
time_col='timestamp',
target_col='value'
)
```
After computing the forecasts, you can visualize the results for each cross-validation cutoff to better understand model performance over time.
```python
cutoffs = cv_df['cutoff'].unique()
for cutoff in cutoffs:
fig = nixtla_client.plot(
df.tail(100),
cv_df.query('cutoff == @cutoff').drop(columns=['cutoff', 'value']),
time_col='timestamp',
target_col='value'
)
display(fig)
```
<img src="/images/docs/tutorials-uncertainty/quantiles_cv1.png"/>
<img src="/images/docs/tutorials-uncertainty/quantiles_cv2.png"/>
Each plot shows a different cross-validation window (or cutoff) for the time series. This allows you to evaluate how well the predicted intervals capture the true values across multiple, independent forecast windows.
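To complement the plots with a number, you can compute the share of actuals inside the q-10 to q-90 band for each window. The sketch below uses toy data; with real results you would use `cv_df` directly, since it contains the `value`, `cutoff`, and quantile columns.

```python
import numpy as np
import pandas as pd

# Toy cross-validation frame: two cutoffs, actuals, and a q-10/q-90 band
rng = np.random.default_rng(1)
n = 1000
toy = pd.DataFrame({
    'cutoff': np.repeat(['2020-01-01', '2020-02-01'], n),
    'value': rng.normal(100, 10, size=2 * n),
    'TimeGPT-q-10': 100 - 12.82,  # ~10th percentile of N(100, 10)
    'TimeGPT-q-90': 100 + 12.82,  # ~90th percentile of N(100, 10)
})

# The q-10 to q-90 band should contain ~80% of the actuals in each window
inside = toy['value'].between(toy['TimeGPT-q-10'], toy['TimeGPT-q-90'])
coverage_per_window = inside.groupby(toy['cutoff']).mean()
```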
<Check>
Congratulations! You have successfully generated quantile forecasts using TimeGPT. You also visualized historical quantile predictions and evaluated their performance through cross-validation.
</Check>
---
title: "Uncertainty Quantification with TimeGPT"
description: "Learn how to generate quantile forecasts and prediction intervals to capture uncertainty in your forecasts."
icon: "question"
---
In time series forecasting, it is important to consider the full probability distribution of the predictions rather than a single point estimate. This provides a more accurate representation of the uncertainty around the forecasts and allows better decision-making.
**TimeGPT** supports uncertainty quantification through quantile forecasts and prediction intervals.
## Why Consider the Full Probability Distribution?
When you focus on a single point prediction, you lose valuable information about the range of possible outcomes. By quantifying uncertainty, you can:
- Identify best-case and worst-case scenarios
- Improve risk management and contingency planning
- Gain confidence in decisions that rely on forecast accuracy
---
title: "Historical Forecast Evaluation"
description: "Learn how to validate TimeGPT models by comparing historical forecasts against actual data."
icon: "clock-rotate-left"
---
TimeGPT can return historical forecasts alongside prospective predictions. You can access this functionality by calling the `forecast` method with `add_history=True`.
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Nixtla/nixtla/blob/main/nbs/docs/tutorials/09_historical_forecast.ipynb)
<Info>
Historical forecasts can help you understand how well the model has performed in the past. This view provides insight into the model's predictive accuracy and any patterns in its performance.
</Info>
<Tabs>
<Tab title="Overview">
<CardGroup>
<Card title="Key Benefit">
Adding historical forecasts (`add_history=True`) lets you compare model predictions against actual data, helping to identify trends.
</Card>
<Card title="When to Use Historical Forecasts">
Useful for performance evaluation, model reliability checks, and building trust in the predictions.
</Card>
</CardGroup>
</Tab>
</Tabs>
<Steps>
<Step title="1. Import Required Packages">
First, install and import the required packages. Then, initialize the Nixtla client. Replace `my_api_key_provided_by_nixtla` with your actual API key.
```python Import Packages and Initialize NixtlaClient
import pandas as pd
from nixtla import NixtlaClient
```
```python Initialize NixtlaClient with API Key
nixtla_client = NixtlaClient(
# Defaults to os.environ.get("NIXTLA_API_KEY")
api_key='my_api_key_provided_by_nixtla'
)
```
<Check>
**Use an Azure AI endpoint**<br/>
If you want to use an Azure AI endpoint, set the `base_url` argument:
```python Initialize NixtlaClient with Azure AI Endpoint
nixtla_client = NixtlaClient(
base_url="your azure ai endpoint",
api_key="your api_key"
)
```
</Check>
</Step>
<Step title="2. Load the Dataset">
<AccordionGroup>
<Accordion title="Load and Inspect Data">
First, import an example dataset using `pandas`:
```python Load Dataset
df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv')
df.head()
```
| | timestamp | value |
| ----- | ------------ | ------- |
| 0 | 1949-01-01 | 112 |
| 1 | 1949-02-01 | 118 |
| 2 | 1949-03-01 | 132 |
| 3 | 1949-04-01 | 129 |
| 4 | 1949-05-01 | 121 |
<Info>
This dataset contains monthly passenger counts for an airline, starting in January 1949. The `timestamp` column is the time dimension, and `value` is the passenger count.
</Info>
</Accordion>
</AccordionGroup>
You can visualize the dataset using Nixtla's built-in plotting function:
```python Plot Initial Time Series
nixtla_client.plot(df, time_col='timestamp', target_col='value')
```
<Frame caption="Time Series Plot">
![Time Series Plot](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/tutorials/09_historical_forecast_files/figure-markdown_strict/cell-11-output-1.png)
</Frame>
</Step>
<Step title="3. Generate Historical Forecast">
<AccordionGroup>
<Accordion title="Using add_history=True">
Set `add_history=True` to generate historical fitted values. The returned DataFrame includes future forecasts (`h` steps ahead) and historical predictions.
<Warning>
Historical forecasts are not affected by `h`; how far back they extend depends on the data frequency. They are generated in a rolling-window manner, building the full series of predictions sequentially.
</Warning>
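The rolling-window idea can be illustrated with a deliberately naive model that predicts the last observed value. This is only a sketch of the mechanism, not what TimeGPT does internally:

```python
import pandas as pd

# Toy series: each historical 'prediction' may only use data before its timestamp
series = pd.Series(
    [112, 118, 132, 129, 121],
    index=pd.date_range('1949-01-01', periods=5, freq='MS'),
)

# Naive one-step-ahead historical forecast: carry the previous value forward.
# The first observation has no history, so it cannot be forecast.
historical_naive = series.shift(1).dropna()
```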
```python Generate Historical Forecast with add_history
timegpt_fcst_with_history_df = nixtla_client.forecast(
df=df,
h=12,
time_col='timestamp',
target_col='value',
add_history=True,
)
```
<Check>
Below is an example of console output showing the progress and validation steps:
</Check>
<Accordion title="Log Output">
```bash Forecast Process Log
INFO:nixtla.nixtla_client:Validating inputs...
INFO:nixtla.nixtla_client:Preprocessing dataframes...
INFO:nixtla.nixtla_client:Inferred freq: MS
INFO:nixtla.nixtla_client:Calling Forecast Endpoint...
INFO:nixtla.nixtla_client:Calling Historical Forecast Endpoint...
```
</Accordion>
<Info>
**Available models in Azure AI**<br/>
If you use an Azure AI endpoint, specify the model with `model="azureai"`:
```python
nixtla_client.forecast(..., model="azureai")
```
For the public API, two models are available:
- `timegpt-1` (default)
- `timegpt-1-long-horizon`
See [this tutorial](https://docs.nixtla.io/docs/tutorials-long_horizon_forecasting) to learn how to use `timegpt-1-long-horizon`.
</Info>
</Accordion>
</AccordionGroup>
<CardGroup>
<Card title="Inspection">
Review the first rows of the historical predictions:
```python Inspect Historical Predictions
timegpt_fcst_with_history_df.head()
```
| | timestamp | TimeGPT |
| ----- | ------------ | ------------ |
| 0 | 1951-01-01 | 135.483673 |
| 1 | 1951-02-01 | 144.442398 |
| 2 | 1951-03-01 | 157.191910 |
| 3 | 1951-04-01 | 148.769363 |
| 4 | 1951-05-01 | 140.472946 |
</Card>
<Card title="Compare Observed & Predicted">
Plot the observed time series against both historical and future predictions for a consolidated view:
```python Plot Observed vs Predictions
nixtla_client.plot(df, timegpt_fcst_with_history_df, time_col='timestamp', target_col='value')
```
<Frame caption="Historical and Future Predictions Plot">
![Historical and Future Predictions Plot](https://raw.githubusercontent.com/Nixtla/nixtla/readme_docs/nbs/_docs/docs/tutorials/09_historical_forecast_files/figure-markdown_strict/cell-14-output-1.png)
</Frame>
</Card>
</CardGroup>
<Info>
Note that initial values of the dataset are not included in the historical forecasts. The model needs a certain number of observations before it can begin generating historical predictions. These early points serve as input data and cannot themselves be forecasted.
</Info>
</Step>
</Steps>
---
title: "Validation"
description: "Learn how to validate time series models with cross-validation and historical forecasts"
icon: "check"
---
<Info>
Time series data can be highly variable. Validating your model's accuracy and reliability is crucial for confident forecasting.
</Info>
One of the primary challenges in time series forecasting is the inherent uncertainty and variability over time. It is therefore critical to validate the accuracy and reliability of the models you use.
`TimeGPT` provides capabilities for cross-validation and historical forecasts to assist in validating your predictions.
<Steps>
<Step title="Step 1: Understand Validation Goals">
Before you begin, clarify what you want to achieve with your validation process. For example:
- Measure performance over different time windows.
- Evaluate historical forecasts for accuracy insight.
</Step>
<Step title="Step 2: Choose Your Validation Method">
Decide whether cross-validation, historical forecasting, or both suit your scenario. Consult the resources below to learn how to implement each approach.
</Step>
<Step title="Step 3: Implement & Assess">
Implement your validation method in a controlled environment. Review performance metrics such as Root Mean Squared Error (RMSE) or Mean Absolute Error (MAE) to determine success.
</Step>
</Steps>
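The metrics mentioned in Step 3 are straightforward to compute with NumPy; here is a minimal sketch on hypothetical actuals and forecasts:

```python
import numpy as np

# Hypothetical actual and predicted values
y_true = np.array([112.0, 118.0, 132.0, 129.0])
y_pred = np.array([110.0, 120.0, 128.0, 133.0])

# Mean Absolute Error: average magnitude of the errors
mae = np.mean(np.abs(y_true - y_pred))

# Root Mean Squared Error: penalizes large errors more heavily
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
```

By construction RMSE is at least as large as MAE, and the gap widens as the errors become more uneven.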
## What You Will Learn
<CardGroup cols={2}>
<Card title="Cross-Validation" cta="Learn More" href="https://docs.nixtla.io/docs/tutorials-cross_validation" icon="chart-bar">
Learn how to perform time series cross-validation across multiple consecutive windows of your data.
</Card>
<Card title="Historical Forecasts" cta="Learn More" href="https://docs.nixtla.io/docs/tutorials-historical_forecast" icon="clock">
Learn how to generate historical forecasts within sample data to validate how `TimeGPT` would have performed historically, offering deeper insights into your model's accuracy.
</Card>
</CardGroup>
<AccordionGroup>
<Accordion title="Why Cross-Validation Is Crucial">
Cross-validation helps you evaluate your model's ability to generalize by testing it on multiple consecutive time windows.
By doing so, you gain confidence that the model isn't overfitting to a single period of data.
</Accordion>
<Accordion title="Why Generate Historical Forecasts">
Historical forecasts provide insight into how your model would have performed in real-world conditions.
These forecasts simulate past scenarios and compare predictions to actual outcomes, helping refine your approach and understanding of model performance.
</Accordion>
</AccordionGroup>
<Check>
By combining cross-validation and historical forecasts, you can get a comprehensive view of how reliable your time series predictions are.
</Check>
## Example Usage
Below is a simple example of how you might set up a validation workflow in code:
```python Validation Workflow Example
import pandas as pd
from nixtla import NixtlaClient

# 1. Prepare your data (column names below are placeholders for your dataset)
df = pd.read_csv('time_series_data.csv')

# 2. Initialize the Nixtla client
nixtla_client = NixtlaClient(api_key="YOUR_API_KEY")

# 3. Cross-validation across multiple consecutive windows
cv_df = nixtla_client.cross_validation(
    df=df,
    h=12,
    n_windows=5,
    time_col='timestamp',
    target_col='sales',
)
print("Cross-validation results:", cv_df.head())

# 4. Historical forecast
historical_df = nixtla_client.forecast(
    df=df,
    h=12,
    time_col='timestamp',
    target_col='sales',
    add_history=True,
)
print("Historical forecast:", historical_df.head())

# 5. Evaluate results
# Compare model insights from both cross-validation and historical forecasts
```
<Warning>
Always ensure your validation data is representative of real-world conditions. Avoid data leakage by not including future data when training.
</Warning>
<Info>
For more in-depth usage and parameter configurations, refer to the official [Cross-Validation](https://docs.nixtla.io/docs/tutorials-cross_validation) and [Historical Forecasting](https://docs.nixtla.io/docs/tutorials-historical_forecast) documentation.
</Info>