"vscode:/vscode.git/clone" did not exist on "b62439d71eaf154ea8dcd9d3f78b6e264e39e013"
Commit f42429f6 authored by bailuo's avatar bailuo
Browse files

readme

parents
# How to contribute
## Did you find a bug?
* Ensure the bug was not already reported by searching on GitHub under Issues.
* If you're unable to find an open issue addressing the problem, open a new one. Be sure to include a title and clear description, as much relevant information as possible, and a code sample or an executable test case demonstrating the expected behavior that is not occurring.
* Be sure to add the complete error messages.
## Do you have a feature request?
* Ensure that it hasn't already been implemented in the `main` branch of the repository and that there isn't an open Issue requesting it yet.
* Open a new issue and make sure to describe it clearly, mention how it improves the project and why it's useful.
## Do you want to fix a bug or implement a feature?
Bug fixes and features are added through pull requests (PRs).
## PR submission guidelines
* Keep each PR focused. While it's more convenient, do not combine several unrelated fixes together. Create as many branches as needed to keep each PR focused.
* Ensure that your PR includes a test that fails without your patch, and passes with it.
* Ensure the PR description clearly describes the problem and solution. Include the relevant issue number if applicable.
* Do not mix style changes/fixes with "functional" changes. Such PRs are very difficult to review and will most likely get rejected.
* Do not add/remove vertical whitespace. Preserve the original style of the file you edit as much as you can.
* Do not turn an already submitted PR into your development playground. If after you submitted PR, you discovered that more work is needed - close the PR, do the required work and then submit a new PR. Otherwise each of your commits requires attention from maintainers of the project.
* If, however, you submitted a PR and received a request for changes, you should proceed with commits inside that PR, so that the maintainer can see the incremental fixes and won't need to review the whole PR again. In the exceptional case where you realize it will take many, many commits to complete the requests, it's probably best to close the PR, do the work and then submit it again. Use common sense when choosing one way over the other.
### Local setup for working on a PR
#### Clone the repository
* HTTPS: `git clone https://github.com/Nixtla/nixtla.git`
* SSH: `git clone git@github.com:Nixtla/nixtla.git`
* GitHub CLI: `gh repo clone Nixtla/nixtla`
#### Set up an environment
Create a virtual environment to install the library's dependencies. We recommend [astral's uv](https://github.com/astral-sh/uv).
Once you've created the virtual environment you should activate it and then install the library in editable mode along with its
development dependencies.
```bash
pip install uv
uv venv --python 3.11
source .venv/bin/activate
uv pip install -Ue .[dev]
# If you plan to contribute to documentation, you will also need to install the
# distributed dependencies in addition to the dev dependencies
uv pip install -Ue .[dev,distributed]
```
#### Set Up Nixtla API Key
This library uses `python-dotenv` for development. To set up your Nixtla API key, add the following lines to your `.env` file:
```sh
NIXTLA_API_KEY=<your token>
```
* NOTE: You can get your Nixtla API key by logging into the [Nixtla Dashboard](https://dashboard.nixtla.io/), where you get a few API calls for free. If you need more API calls for development purposes, please write to `support@nixtla.io`.
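When developing locally, the key is typically picked up as in the minimal sketch below, which mirrors how the benchmarking scripts in this repository call `load_dotenv()` before instantiating a `NixtlaClient`:
```python
# Minimal sketch: load NIXTLA_API_KEY from .env and let the client read it from
# the environment (the same pattern used by the scripts in this repository).
from dotenv import load_dotenv
from nixtla import NixtlaClient

load_dotenv()            # reads NIXTLA_API_KEY from the .env file
client = NixtlaClient()  # picks the key up from the environment
```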
#### Install pre-commit
```sh
pre-commit install
pre-commit run --show-diff-on-failure --files nixtla/*
```
#### Viewing documentation locally
The new documentation pipeline relies on `quarto`, `mintlify` and `lazydocs`.
##### Install `quarto`
Install `quarto` by following [this link](https://quarto.org/docs/get-started/).
##### Install mintlify
```sh
npm i -g mint
```
For additional instructions, see [this link](https://mintlify.com/docs/installation).
##### Build the docs
```sh
uv pip install -e '.[dev]' lazydocs
make all_docs
```
Finally, to view the documentation:
```sh
make preview_docs
```
### Running tests
```sh
pytest nixtla_tests
```
If you're only working on the local interface, `pytest nixtla_tests` is all you need.
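A new test added with a PR is an ordinary pytest function placed under `nixtla_tests/`; the file and test names below are purely illustrative and not part of the existing suite:
```python
# nixtla_tests/test_example.py (hypothetical file and test, shown only as a shape)
import pandas as pd


def test_input_frame_uses_expected_columns():
    # The SDK's default column names are unique_id, ds, and y.
    df = pd.DataFrame(
        {"unique_id": ["ts_0"], "ds": pd.to_datetime(["2024-01-01"]), "y": [1.0]}
    )
    assert list(df.columns) == ["unique_id", "ds", "y"]
```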
## Do you want to contribute to the documentation?
Docs are automatically created from the notebooks in the `nbs` folder.
### Modifying an existing doc
#### For scripts
* The docs are automatically generated from the docstrings in the `nixtla` folder.
* To contribute, ensure your docstrings follow the Google style format (see the example after this list).
* Once your docstring is correctly written, the documentation framework will scrape it and regenerate the corresponding `.mdx` files, and your changes will then appear in the updated docs.
* To contribute examples or how-to guides, make sure you submit clean notebooks with properly formatted LaTeX, links, and images.
* Make an appropriate entry in the `docs/mintlify/mint.json` file.
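A minimal example of the expected Google-style layout (the function itself is hypothetical; only the docstring structure matters):
```python
def describe_forecast_request(h: int, freq: str) -> str:
    """Summarize a forecasting request (hypothetical helper, shown for the docstring layout).

    Args:
        h: Number of future periods to forecast.
        freq: Pandas frequency alias of the series, e.g. "MS" or "H".

    Returns:
        A human-readable description of the request.
    """
    return f"forecast {h} steps at frequency {freq}"
```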
#### For notebooks
1. Find the relevant notebook.
2. Make your changes.
* Do not rename the document.
* Do not change the first header (title). The first header is used in Readme.com to create the filename. For example, a first header of `TimeGPT Subscription Plans and Pricing` in folder `getting-started` will result in the following online link to the document: `https://docs.nixtla.io/docs/getting-started-timegpt_subscription_plans_and_pricing`.
3. Run all cells.
4. Add, commit and push the changes.
5. Open a PR.
6. Follow the steps under 'Publishing documentation'.
### Creating a new document
1. Copy an existing jupyter notebook in a folder where you want to create a new document. This should be a subfolder of `nbs/docs`.
2. Rename the document using the following format: `[document_number]_document_title_in_lower_case.ipynb` (for example: `01_quickstart.ipynb`), incrementing the document number from the current highest number within the folder and retaining the leading zero.
3. The first header (title) is ideally the same as the notebook name (without the document number). This is because in Readme.com the first header (title) is used to create the filename. For example, a first header of `TimeGPT Subscription Plans and Pricing` of a document in folder `getting-started` will result in the following online link to the document: `https://docs.nixtla.io/docs/getting-started-timegpt_subscription_plans_and_pricing`. Thus, it is advised to keep the document name and header the same.
4. Work on your new document. Pay attention to:
* The Google Colab link;
* How images should be linked;
* How the `IN_COLAB` variable is used to distinguish when the notebook is run locally vs in Google Colab (a common pattern is sketched after this list).
5. Add the document to `docs/mintlify/mint.json` under the correct group with the following name `path-to-document/document_title_in_lower_case`.
6. Follow steps 3 - 6 under `Modifying an existing doc`.
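A common way to set the `IN_COLAB` flag mentioned in step 4 is sketched below; the exact convention may differ slightly between notebooks, so copy it from a neighboring document:
```python
# Hedged sketch of the usual IN_COLAB detection pattern; verify against an
# existing notebook in nbs/docs before relying on it.
try:
    import google.colab  # noqa: F401  (only importable inside Google Colab)
    IN_COLAB = True
except ImportError:
    IN_COLAB = False

if not IN_COLAB:
    # e.g. read the API key from the local .env file when running outside Colab
    from dotenv import load_dotenv
    load_dotenv()
```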
### Publishing documentation
When the PR is approved, the documentation will not be visible right away. It becomes visible:
1. When we make a release
2. When you manually trigger the workflows required to publish. The workflows to trigger under [Actions](https://github.com/Nixtla/nixtla/actions), in order, are:
1. The `build-docs` workflow on branch `main`. Use the `Run workflow` button on the right and choose the `main` branch.
2. The `Deploy to readme dot com` workflow on branch `main`. Use the `Run workflow` button on the right and choose the `main` branch.
* After both workflows have completed (should take max. 10 minutes), check the [docs](https://docs.nixtla.io/) to see if your changes have been reflected.
It could be that on our Readme.com [docs](https://docs.nixtla.io/), the newly created document does not end up in the correct (sub)folder. If that happens:
1. Go to `Log In` (top right corner) and log in with your Nixtla account.
2. Go to the Admin Dashboard (top right, under your user name).
3. On the left, go to `Guides`. You now see an overview of the documentation and the structure.
4. Simply drag and drop the document that is in the incorrect (sub)folder to the correct (sub)folder. From here on, the document will remain in the correct (sub)folder, even if you update its contents.
Make sure to check that our [Mintlify docs](https://nixtlaverse.nixtla.io/nixtla/docs/getting-started/introduction.html) also work as expected and that your change is reflected there too. Mintlify is usually somewhat slower at syncing the docs, so it can take a bit more time for the change to show up.
### Do's and don'ts
* Don't rename documents! The filename is used statically in various files to properly index the file in the correct (sub)folder. If you rename a document, you're effectively creating a new one: follow the procedure for creating a new document (above), and check every other document (yes, every single one) in our documentation for links that now break because of the rename.
* Check the changes / new document online in both [Readme.com](https://docs.nixtla.io/) and [Mintlify](https://nixtlaverse.nixtla.io/nixtla/docs/getting-started/introduction.html).
* Screwed up? You can hide a document in Readme.com in the Admin console, under `Guides`. Make sure to unhide it again after you've fixed your mistakes.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION FOR THE PYTHON SDK
(FOR TERMS AND CONDITIONS OF TIME GPT VISIT https://docs.nixtla.io/docs/terms-and-conditions)
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2022 Nixtla
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
include settings.ini
include LICENSE
include CONTRIBUTING.md
include README.md
recursive-exclude * __pycache__
devenv:
uv sync --quiet --all-groups --all-extras --frozen
uv run --no-sync pre-commit install
init_codespace:
npm install -g @anthropic-ai/claude-code@1.0.127
npm i -g mint
git pull || true
uv sync --quiet --all-groups --all-extras --frozen
jupyter:
mkdir -p tmp
jupyter lab --port=8888 --ip=0.0.0.0 --no-browser --allow-root --NotebookApp.token='' --NotebookApp.password='' --NotebookApp.allow_origin='*'
load_docs_scripts:
# load processing scripts
if [ ! -d "docs-scripts" ] ; then \
git clone -b scripts https://github.com/Nixtla/docs.git docs-scripts --single-branch; \
fi
api_docs:
lazydocs .nixtla --no-watermark
python docs/to_mdx.py
examples_docs:
mkdir -p nbs/_extensions
cp -r docs-scripts/mintlify/ nbs/_extensions/mintlify
quarto render nbs --output-dir ../docs/mintlify/
format_docs:
# replace _docs with docs
sed -i -e 's/_docs/docs/g' ./docs-scripts/docs-final-formatting.bash
bash ./docs-scripts/docs-final-formatting.bash
find docs/mintlify -name "*.mdx" -exec sed -i.bak '/^:::/d' {} + && find docs/mintlify -name "*.bak" -delete
find docs/mintlify -name "*.mdx" -exec sed -i.bak 's/<support@nixtla\.io>/\\<support@nixtla.io\\>/g' {} + && find docs/mintlify -name "*.bak" -delete
preview_docs:
cd docs/mintlify && mintlify dev
clean:
rm -f docs/*.md
find docs/mintlify -name "*.mdx" -exec rm -f {} +
all_docs: load_docs_scripts api_docs examples_docs format_docs
licenses:
pip-licenses --format=csv --with-authors --with-urls > third_party_licenses.csv
python scripts/filter_licenses.py
rm -f third_party_licenses.csv
@echo "✓ THIRD_PARTY_LICENSES.md updated"
# TimeGPT-1
## Paper
[TimeGPT-1](https://arxiv.org/abs/2310.03589)
## Model Overview
TimeGPT is a Transformer-based time series model that generates accurate forecasts for a wide variety of datasets not seen during training.
![alt text](image.png)
## Environment Requirements
- The base environment requirements are listed below; adjust them to your actual setup.
| Software | Version |
| :------: | :------: |
| DTK | 25.04.1 |
| python | 3.11 |
| torch | 2.4.1+das.opt1.dtk25041 |
Recommended image:
- Adjust the `-v` mount paths according to your actual model paths.
```bash
docker run -it --shm-size 50g --network=host --name timegpt --privileged --device=/dev/kfd --device=/dev/dri --device=/dev/mkfd --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -u root -v /opt/hyhal/:/opt/hyhal/:ro -v /path/your_code_path/:/path/your_code_path/ image.sourcefind.cn:5000/dcu/admin/base/pytorch:2.4.1-ubuntu22.04-dtk25.04.1-py3.11 bash
```
More container images are available from [光源](https://sourcefind.cn/#/service-list).
The special deep learning libraries required by DCU accelerators for this project can be downloaded and installed from the [光合](https://developer.sourcefind.cn/tool/) developer community; install the remaining packages according to requirements.txt:
```
pip install -r requirements.txt
```
## Dataset
None yet.
## Training
None yet.
<!-- ### Single-node training
```bash
``` -->
<!-- ### Multi-node training
```bash
``` -->
## Inference
### Single-node inference
```python
# Get your API Key at dashboard.nixtla.io
import pandas as pd
from nixtla import NixtlaClient

# Initialize the client
nixtla_client = NixtlaClient(api_key = 'YOUR API KEY HERE')
# Using electricity demand data as an example
df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short.csv')
# Forecast
fcst_df = nixtla_client.forecast(df, h=24, level=[80, 90])
```
<!-- ### Multi-node inference
```bash
``` -->
### Accuracy
DCU results are consistent with GPU accuracy.
<!-- ## Pretrained Weights
| Model | Weight size | DCU model | Minimum number of cards | Download |
|:-----:|:----------:|:----------:|:---------------------:|:----------:|
| | - | K100AI | 1 | [Download]() | -->
None yet.
## Source Repository and Issue Reporting
- https://developer.sourcefind.cn/codes/modelzoo/timegpt-pytorch
## References
- https://github.com/Nixtla/nixtla
# Nixtla &nbsp; [![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Statistical%20Forecasting%20Algorithms%20by%20Nixtla%20&url=https://github.com/Nixtla/neuralforecast&via=nixtlainc&hashtags=StatisticalModels,TimeSeries,Forecasting) &nbsp;[![Slack](https://img.shields.io/badge/Slack-4A154B?&logo=slack&logoColor=white)](https://join.slack.com/t/nixtlacommunity/shared_invite/zt-1pmhan9j5-F54XR20edHk0UtYAPcW4KQ)
<div align="center">
<img src="https://raw.githubusercontent.com/Nixtla/neuralforecast/main/nbs/imgs_indx/logo_new.png"/>
<h1 align="center">TimeGPT-1 </h1>
<h3 align="center">The first foundation model for forecasting and anomaly detection</h3>
[![CI](https://github.com/Nixtla/nixtla/actions/workflows/ci.yaml/badge.svg?branch=main)](https://github.com/Nixtla/nixtla/actions/workflows/ci.yaml)
[![PyPi](https://img.shields.io/pypi/v/nixtla?color=blue)](https://pypi.org/project/nixtla/)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://github.com/Nixtla/nixtla/blob/main/LICENSE)
[![docs](https://img.shields.io/website-up-down-green-red/http/docs.nixtla.io/.svg?label=docs)](https://docs.nixtla.io)
[![Downloads](https://pepy.tech/badge/nixtla)](https://pepy.tech/project/nixtla)
[![Downloads](https://pepy.tech/badge/nixtla/month)](https://pepy.tech/project/nixtla)
[![Downloads](https://pepy.tech/badge/nixtla/week)](https://pepy.tech/project/nixtla)
**TimeGPT** is a production-ready, generative pretrained transformer for time series. It can accurately forecast domains such as retail, electricity, finance, and IoT with just a few lines of code 🚀.
</div>
## 🚀 Quick Start
https://github.com/Nixtla/nixtla/assets/4086186/163ad9e6-7a16-44e1-b2e9-dab8a0b7b6b6
### Install nixtla's SDK
```bash
pip install "nixtla>=0.7.0"
```
### Import libraries and load data
```python
import pandas as pd
from nixtla import NixtlaClient
```
### Forecast using TimeGPT in 3 easy steps
```python
# Get your API Key at dashboard.nixtla.io
# 1. Instantiate the NixtlaClient
nixtla_client = NixtlaClient(api_key = 'YOUR API KEY HERE')
# 2. Read historic electricity demand data
df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short.csv')
# 3. Forecast the next 24 hours
fcst_df = nixtla_client.forecast(df, h=24, level=[80, 90])
# 4. Plot your results (optional)
nixtla_client.plot(df, fcst_df, level=[80, 90])
```
![Forecast Results](./nbs/img/forecast_readme.png)
### Anomaly detection using TimeGPT in 3 easy steps
```python
# Get your API Key at dashboard.nixtla.io
# 1. Instantiate the NixtlaClient
nixtla_client = NixtlaClient(api_key = 'YOUR API KEY HERE')
# 2. Read data (Wikipedia visits of NFL star Peyton Manning)
df = pd.read_csv('https://datasets-nixtla.s3.amazonaws.com/peyton-manning.csv')
# 3. Detect Anomalies
anomalies_df = nixtla_client.detect_anomalies(df, time_col='timestamp', target_col='value', freq='D')
# 4. Plot your results (optional)
nixtla_client.plot(df, anomalies_df, time_col='timestamp', target_col='value')
```
![AnomalyDetection](nbs/img/anomaly.png)
## 🤓 API support for other languages
Explore our [API Reference](https://docs.nixtla.io) to discover how to leverage TimeGPT across various programming languages including JavaScript, Go, and more.
## 🔥 Features and Capabilities
- **Zero-shot Inference**: TimeGPT can generate forecasts and detect anomalies straight out of the box, requiring no prior training data. This allows for immediate deployment and quick insights from any time series data.
- **Fine-tuning**: Enhance TimeGPT's capabilities by fine-tuning the model on your specific datasets, enabling the model to adapt to the nuances of your unique time series data and improving performance on tailored tasks (see the sketch after this list).
- **API Access**: Integrate TimeGPT seamlessly into your applications via our robust API. Upcoming support for Azure Studio will provide even more flexible integration options. Alternatively, deploy TimeGPT on your own infrastructure to maintain full control over your data and workflows.
- **Add Exogenous Variables**: Incorporate additional variables that might influence your predictions to enhance forecast accuracy (e.g. special dates, events, or prices).
- **Multiple Series Forecasting**: Simultaneously forecast multiple time series data, optimizing workflows and resources.
- **Custom Loss Function**: Tailor the fine-tuning process with a custom loss function to meet specific performance metrics.
- **Cross Validation**: Implement out-of-the-box cross-validation techniques to ensure model robustness and generalizability.
- **Prediction Intervals**: Provide intervals in your predictions to quantify uncertainty effectively.
- **Irregular Timestamps**: Handle data with irregular timestamps, accommodating non-uniform interval series without preprocessing.
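The sketch below exercises a few of these capabilities through the SDK. The `level` argument appears in the quick-start examples above and exogenous inputs are passed via `X_df` in this repository's benchmarking script; `finetune_steps` and `cross_validation` are assumed from the SDK's documented surface and should be checked against the current API reference:
```python
# Sketch only: parameter names other than df, h, and level are assumptions to
# verify against the API reference before use.
import pandas as pd
from nixtla import NixtlaClient

client = NixtlaClient(api_key="YOUR API KEY HERE")
df = pd.read_csv(
    "https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/electricity-short.csv"
)

# Prediction intervals via `level`, plus light fine-tuning on your own history.
fcst_df = client.forecast(df, h=24, level=[80, 90], finetune_steps=10)

# Out-of-the-box cross-validation over two rolling windows.
cv_df = client.cross_validation(df, h=24, n_windows=2)
```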
## 📚 Documentation with examples and use cases
Dive into our [comprehensive documentation](https://docs.nixtla.io/docs/getting-started-timegpt_quickstart) to discover examples and practical use cases for TimeGPT. Our documentation covers a wide range of topics, including:
- **Getting Started**: Begin with our user-friendly [Quickstart Guide](https://docs.nixtla.io/docs/getting-started-timegpt_quickstart) and learn how to [set up your API key](https://docs.nixtla.io/docs/getting-started-setting_up_your_api_key) effortlessly.
- **Advanced Techniques**: Master advanced forecasting methods and learn how to enhance model accuracy with our tutorials on [anomaly detection](https://docs.nixtla.io/docs/tutorials-anomaly_detection), fine-tuning models using specific loss functions, and scaling computations across distributed frameworks such as [Spark, Dask, and Ray](https://docs.nixtla.io/docs/tutorials-computing_at_scale).
- **Specialized Topics**: Explore specialized topics like [handling exogenous variables](https://docs.nixtla.io/docs/tutorials-holidays_and_special_dates), model validation through [cross-validation](https://docs.nixtla.io/docs/tutorials-cross_validation), and strategies for [forecasting under uncertainty](https://docs.nixtla.io/docs/tutorials-uncertainty_quantification).
- **Real World Applications**: Uncover how TimeGPT is applied in real-world scenarios through case studies on [forecasting web traffic](https://docs.nixtla.io/docs/use-cases-forecasting_web_traffic) and [predicting Bitcoin prices](https://docs.nixtla.io/docs/use-cases/bitcoin_price_prediction).
## 🗞️ TimeGPT-1: Revolutionizing Forecasting and Anomaly Detection
Time series data is pivotal across various sectors, including finance, healthcare, meteorology, and social sciences. Whether it's monitoring ocean tides or tracking the Dow Jones's daily closing values, time series data is crucial for forecasting and decision-making.
Traditional analysis methods such as ARIMA, ETS, MSTL, Theta, CES, machine learning models like XGBoost and LightGBM, and deep learning approaches have been standard tools for analysts. However, TimeGPT introduces a paradigm shift with its standout performance, efficiency, and simplicity. Thanks to its zero-shot inference capability, TimeGPT streamlines the analytical process, making it accessible even to users with minimal coding experience.
TimeGPT is user-friendly and low-code, enabling users to upload their time series data and either generate forecasts or detect anomalies with just a single line of code. As the only foundation model for time series analysis out of the box, TimeGPT can be integrated via our public APIs, through Azure Studio (coming soon), or deployed on your own infrastructure.
## ⚙️ TimeGPT's Architecture
Self-attention, the revolutionary concept introduced by the paper "Attention Is All You Need", is the basis of this foundation model. TimeGPT is not based on any existing large language model (LLM); it is independently trained on a vast time series dataset as a large transformer model and is designed to minimize forecasting error.
The architecture consists of an encoder-decoder structure with multiple layers, each with residual connections and layer normalization. Finally, a linear layer maps the decoder's output to the forecasting window dimension. The general intuition is that attention-based mechanisms are able to capture the diversity of past events and correctly extrapolate potential future distributions.
![Architecture](nbs/img/forecast.png)
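As a rough sketch of the kind of encoder-decoder stack described above (an illustrative PyTorch layout, not the actual TimeGPT implementation, which is closed source):
```python
# Illustrative only: a generic encoder-decoder Transformer whose linear head maps
# the decoder output to the forecasting window. Not the real TimeGPT architecture.
import torch
import torch.nn as nn


class TinyForecaster(nn.Module):
    def __init__(self, d_model: int = 64, nhead: int = 4, num_layers: int = 2, h: int = 24):
        super().__init__()
        self.h = h
        self.input_proj = nn.Linear(1, d_model)
        # Residual connections and layer normalization are built into each layer.
        self.transformer = nn.Transformer(
            d_model=d_model,
            nhead=nhead,
            num_encoder_layers=num_layers,
            num_decoder_layers=num_layers,
            batch_first=True,
        )
        self.head = nn.Linear(d_model, 1)  # maps decoder output to the forecast value

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        # context: (batch, context_length, 1) -> forecast: (batch, h, 1)
        src = self.input_proj(context)
        tgt = torch.zeros(context.size(0), self.h, src.size(-1), device=context.device)
        out = self.transformer(src, tgt)
        return self.head(out)
```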
TimeGPT was trained on, to our knowledge, the largest collection of publicly available time series, collectively encompassing over 100 billion data points. This training set incorporates time series from a broad array of domains, including finance, economics, demographics, healthcare, weather, IoT sensor data, energy, web traffic, sales, transport, and banking. Due to this diverse set of domains, the training dataset contains time series with a wide range of characteristics.
---
## ⚡️ Zero-shot Results
### Accuracy
TimeGPT has been tested for its zero-shot inference capabilities on more than 300K unique series, which involve using the model without additional fine-tuning on the test dataset. TimeGPT outperforms a comprehensive range of well-established statistical and cutting-edge deep learning models, consistently ranking among the top three performers across various frequencies.
### Ease of use
TimeGPT also excels by offering simple and rapid predictions using a pre-trained model. This stands in stark contrast to other models that typically require an extensive training and prediction pipeline.
![Results](nbs/img/results.jpg)
### Efficiency and Speed
For zero-shot inference, our internal tests recorded an average GPU inference speed of 0.6 milliseconds per series for TimeGPT, which nearly mirrors that of the simple Seasonal Naive.
## 📝 How to cite?
If you find TimeGPT useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2310.03589):
```
@misc{garza2023timegpt1,
title={TimeGPT-1},
author={Azul Garza and Max Mergenthaler-Canseco},
year={2023},
eprint={2310.03589},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
## 🎉 Features and Mentions
TimeGPT has been featured in many publications and has been recognized for its innovative approach to time series forecasting. Here are some of the features and mentions:
- [TimeGPT Revolutionizing Time Series Forecasting](https://www.analyticsvidhya.com/blog/2024/02/timegpt-revolutionizing-time-series-forecasting/)
- [TimeGPT: The First Foundation Model for Time Series Forecasting](https://towardsdatascience.com/timegpt-the-first-foundation-model-for-time-series-forecasting-bf0a75e63b3a)
- [TimeGPT: Revolutionising Time Series Forecasting with Generative Models](https://medium.com/@22meera99/timegpt-revolutionising-time-series-forecasting-with-generative-models-86be6c09fa51)
- [TimeGPT on Turing Post](https://www.turingpost.com/p/timegpt)
- [TimeGPT Presentation at AWS Events](https://www.youtube.com/watch?v=5pYkT0rTCfE&ab_channel=AWSEvents)
- [TimeGPT: Machine Learning for Time Series Made Accessible - Podcast](https://podcasts.apple.com/bg/podcast/timegpt-machine-learning-for-time-series-made-accessible/id1487704458?i=1000638551991)
- [TimeGPT on The Data Exchange](https://thedataexchange.media/timegpt/)
- [How TimeGPT Transforms Predictive Analytics with AI](https://hackernoon.com/how-timegpt-transforms-predictive-analytics-with-ai)
- [TimeGPT: The First Foundation Model - AI Horizon Forecast](https://aihorizonforecast.substack.com/p/timegpt-the-first-foundation-model)
## 🔖 License
TimeGPT is closed source. However, this SDK is open source and available under the Apache 2.0 License. Feel free to contribute (check out the [Contributing](https://github.com/Nixtla/nixtla/blob/main/CONTRIBUTING.md) guide for more details).
## 📞 Get in touch
For any questions or feedback, please feel free to reach out to us at ops [at] nixtla.io.
| Name | Version | License | Author | URL |
|:---------------------|:----------|:--------------------------------------------------------|:-------------------------------------|:---------------------------------------------------|
| certifi | 2025.10.5 | Mozilla Public License 2.0 (MPL 2.0) | Kenneth Reitz | https://github.com/certifi/python-certifi |
| fqdn | 1.5.1 | Mozilla Public License 2.0 (MPL 2.0) | ypcrts | https://github.com/ypcrts/fqdn |
| pathspec | 0.12.1 | Mozilla Public License 2.0 (MPL 2.0) | "Caleb P. Burns" <cpburnz@gmail.com> | UNKNOWN |
| pyreadr | 0.5.2 | GNU Affero General Public License v3 or later (AGPLv3+) | Otto Fajardo | https://github.com/ofajardo/pyreadr |
| pytest-rerunfailures | 16.1 | MPL-2.0 | Leah Klearman <lklrmn@gmail.com> | https://github.com/pytest-dev/pytest-rerunfailures |
| tqdm | 4.67.1 | MIT License; Mozilla Public License 2.0 (MPL 2.0) | UNKNOWN | https://tqdm.github.io |
import os
import fire
import requests
token = os.environ["GITHUB_TOKEN"]
pr_number = os.environ["PR_NUMBER"]
headers = {
"Authorization": f"token {token}",
"Accept": "application/vnd.github.v3+json",
}
base_url = "https://api.github.com/repos/Nixtla/nixtla/issues"
def get_comments():
resp = requests.get(f"{base_url}/{pr_number}/comments", headers=headers)
if resp.status_code != 200:
raise RuntimeError(resp.text)
return resp.json()
def upsert_comment(body: str, comment_id: str | None):
data = {"body": body}
if comment_id is None:
resp = requests.post(
f"{base_url}/{pr_number}/comments", json=data, headers=headers
)
else:
resp = requests.patch(
f"{base_url}/comments/{comment_id}", json=data, headers=headers
)
return resp
def main(search_term: str, file: str):
comments = get_comments()
existing_comment = [
c for c in comments if search_term in c["body"] and c["user"]["type"] == "Bot"
]
if existing_comment:
comment_id = existing_comment[0]["id"]
else:
comment_id = None
with open(file, "rt") as f:
summary = f.read()
resp = upsert_comment(summary, comment_id)
if resp.status_code not in (200, 201, 202):
raise RuntimeError(f"{resp.status_code}: {resp.text}")
if __name__ == "__main__":
fire.Fire(main)
experiments:
- air-passengers:
- dataset_url: https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv
- time_col: timestamp
- target_col: value
- season_length: 12 # for benchmarks
- freq:
- MS
- h:
- 12
- 24
- electricity-multiple-series:
- dataset_url: https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/ercot_multiple_ts.csv
- season_length: 24 # for benchmarks
- time_col: timestamp
- target_col: value
- freq:
- H
- h:
- 24
- 168
- 336
import logging
import os
from time import time
from typing import List, Optional, Tuple
import pandas as pd
import yaml
from dotenv import load_dotenv
from statsforecast import StatsForecast
from statsforecast.models import Naive, SeasonalNaive
from utilsforecast.evaluation import evaluate
from utilsforecast.losses import mae, mape, mse
from nixtla import NixtlaClient
logger = logging.getLogger(__name__)
load_dotenv()
class Experiment:
"""
This class represents an experiment for evaluating the performance of different models.
The main method, evaluate_performance, is intended to be called for different models.
"""
def __init__(
self,
df: pd.DataFrame,
experiment_name: str,
id_col: str,
time_col: str,
target_col: str,
h: int,
season_length: int,
# Freq cannot be inferred
# because of StatsForecast
freq: str,
level: Optional[List[int]] = None,
n_windows: int = 1, # @A: this should be replaced with cross validation
):
self.df = df
self.experiment_name = experiment_name
self.id_col = id_col
self.time_col = time_col
self.target_col = target_col
self.h = h
self.season_length = season_length
self.freq = freq
self.level = level
self.n_windows = n_windows
self.eval_index = [
"experiment_name",
"h",
"season_length",
"freq",
"level",
"n_windows",
"metric",
]
(
self.df_train,
self.df_test,
self.df_cutoffs,
self.has_id_col,
self.comb_cv,
) = self._split_df(df)
self.benchmark_models = ["SeasonalNaive", "Naive"]
def _split_df(
self, df: pd.DataFrame
) -> Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame, bool, List]:
has_id_col = self.id_col in df
if has_id_col:
df_test = df.groupby(self.id_col).tail(self.h)
comb_cv = [self.id_col, self.time_col]
else:
df_test = df.tail(self.h)
comb_cv = [self.time_col]
df_train = df.drop(df_test.index)
if has_id_col:
df_cutoffs = (
df_train.groupby(self.id_col)[[self.time_col]].max().reset_index()
)
else:
df_cutoffs = df_train[[self.time_col]].max().to_frame().T
df_cutoffs = df_cutoffs.rename(
columns={
self.time_col: "cutoff",
}
)
return df_train, df_test, df_cutoffs, has_id_col, comb_cv
def _evaluate_cv(
self, cv_df: pd.DataFrame, total_time: float, model: str
) -> pd.DataFrame:
metrics = [mae, mse, mape]
if not self.has_id_col:
cv_df = cv_df.assign(unique_id="ts_0")
eval_df = cv_df.groupby("cutoff").apply(
lambda df_cutoff: evaluate(
df_cutoff,
metrics=metrics,
models=[model],
id_col=self.id_col,
time_col=self.time_col,
target_col=self.target_col,
)
)
eval_df = eval_df.reset_index().drop(columns="level_1")
eval_df = eval_df.groupby(["metric"]).mean(numeric_only=True)
eval_df = eval_df.reset_index()
if len(eval_df) != len(metrics):
raise ValueError(f"Expected only {len(metrics)} metrics")
eval_df = pd.concat(
[eval_df, pd.DataFrame({"metric": ["total_time"], model: [total_time]})]
)
for attr in reversed(self.eval_index):
if attr not in eval_df.columns:
eval_df.insert(0, attr, getattr(self, attr))
return eval_df
def _convert_fcst_df_to_cv_df(self, fcst_df: pd.DataFrame) -> pd.DataFrame:
if self.has_id_col:
# add cutoff column
cv_df = fcst_df.merge(self.df_cutoffs, on=[self.id_col])
# add y column
merge_cols = [self.id_col, self.time_col]
else:
# add cutoff column
cv_df = fcst_df.assign(cutoff=self.df_cutoffs["cutoff"].iloc[0])
# add y column
merge_cols = [self.time_col]
cv_df = cv_df.merge(
self.df_test[merge_cols + [self.target_col]],
on=merge_cols,
)
return cv_df
def evaluate_timegpt(self, model: str) -> Tuple[pd.DataFrame, pd.DataFrame]:
init_time = time()
# A: this should be replaced with
# cross validation
timegpt = NixtlaClient()
fcst_df = timegpt.forecast(
df=self.df_train,
X_df=(
self.df_test.drop(columns=self.target_col)
if self.df.shape[1] > 3
else None
),
h=self.h,
freq=self.freq,
level=self.level,
id_col=self.id_col,
time_col=self.time_col,
target_col=self.target_col,
model=model,
)
cv_df = self._convert_fcst_df_to_cv_df(fcst_df)
total_time = time() - init_time
cv_df = cv_df.rename({"TimeGPT": model}, axis=1)
eval_df = self._evaluate_cv(cv_df, total_time, model)
return eval_df, cv_df.drop(columns=[self.target_col, "cutoff"])
def evaluate_benchmark_performance(self) -> Tuple[pd.DataFrame, pd.DataFrame]:
eval_df = []
cv_df = []
# we need to rename columns if needed
renamer = {
self.id_col: "unique_id",
self.time_col: "ds",
self.target_col: "y",
}
df = self.df.copy()
if not self.has_id_col:
df[self.id_col] = "ts_0"
df = df.rename(columns=renamer)
for model in [SeasonalNaive(season_length=self.season_length), Naive()]:
sf = StatsForecast(freq=self.freq, models=[model])
init_time = time()
cv_model_df = sf.cross_validation(
df=df,
h=self.h,
n_windows=self.n_windows,
step_size=self.h,
)
total_time = time() - init_time
cv_model_df = cv_model_df.rename(
columns={value: key for key, value in renamer.items()}
)
eval_model_df = self._evaluate_cv(cv_model_df, total_time, repr(model))
eval_model_df = eval_model_df.set_index(self.eval_index)
eval_df.append(eval_model_df)
cv_df.append(cv_model_df.set_index([self.id_col, self.time_col, "cutoff"]))
eval_df = pd.concat(eval_df, axis=1).reset_index()
cv_df = pd.concat(cv_df, axis=1).reset_index()
if not self.has_id_col:
cv_df = cv_df.drop(columns=[self.id_col])
return eval_df, cv_df.drop(columns=[self.target_col, "cutoff"])
def plot_and_save_forecasts(self, cv_df: pd.DataFrame, plot_dir: str) -> str:
"""Plot ans saves forecasts, returns the path of the plot"""
timegpt = NixtlaClient()
df = self.df.copy()
df[self.time_col] = pd.to_datetime(df[self.time_col])
if not self.has_id_col:
df[self.id_col] = "ts_0"
cv_df[self.time_col] = pd.to_datetime(cv_df[self.time_col])
fig = timegpt.plot(
df[[self.id_col, self.time_col, self.target_col]],
cv_df,
max_insample_length=self.h * (self.n_windows + 4),
id_col=self.id_col,
time_col=self.time_col,
target_col=self.target_col,
)
path = "plot"
for attr in self.eval_index:
if hasattr(self, attr):
path += f"_{getattr(self, attr)}"
plot_path = f"{plot_dir}/{path}.png"
os.makedirs(plot_dir, exist_ok=True)
fig.savefig(plot_path, bbox_inches="tight")
return plot_path
class ExperimentConfig:
def __init__(
self,
config_path: str,
plot_dir: str,
):
self.config_path = config_path
self.plot_dir = plot_dir
self.default_models = ["timegpt-1", "timegpt-1-long-horizon"]
def _parse_yaml(self):
with open(self.config_path, "r") as file:
config = yaml.safe_load(file)
return config
def run_experiments(self):
config = self._parse_yaml()
eval_df = []
for experiment_dict in config["experiments"]:
experiment_name = list(experiment_dict.keys())[0]
experiment = {}
for d in experiment_dict[experiment_name]:
experiment.update(d)
df_url = experiment["dataset_url"]
df = pd.read_csv(df_url)
id_col = experiment.get("id_col", "unique_id")
time_col = experiment.get("time_col", "ds")
target_col = experiment.get("target_col", "y")
season_length = experiment["season_length"]
df[time_col] = pd.to_datetime(df[time_col])
# list parameters
# we will iterate over these parameters
horizons = experiment["h"]
levels = experiment.get("level", [None])
frequencies = experiment.get("freq", [None])
for h in horizons:
for level in levels:
for freq in frequencies:
logger.info(
f"Running experiment {experiment_name} with h={h}, level={level}, freq={freq}"
)
exp = Experiment(
df=df,
experiment_name=experiment_name,
id_col=id_col,
time_col=time_col,
target_col=target_col,
h=h,
freq=freq,
level=level,
season_length=season_length,
)
# Benchmark evaluation
logger.info("Running benchmark evaluation")
(
eval_bench_df,
cv_bench_df,
) = exp.evaluate_benchmark_performance()
eval_bench_df = eval_bench_df.set_index(exp.eval_index)
cv_bench_df = cv_bench_df.set_index(exp.comb_cv)
eval_models_df = [eval_bench_df]
cv_models_df = [cv_bench_df]
# models evaluation
logger.info("Running TimeGPT evaluation")
for model in self.default_models:
(
eval_model_df,
cv_model_df,
) = exp.evaluate_timegpt(model=model)
eval_model_df = eval_model_df.set_index(exp.eval_index)
eval_models_df.append(eval_model_df)
cv_model_df = cv_model_df.set_index(exp.comb_cv)
cv_models_df.append(cv_model_df)
cv_models_df = pd.concat(cv_models_df, axis=1).reset_index()
plot_path = exp.plot_and_save_forecasts(
cv_models_df, self.plot_dir
)
eval_models_df = pd.concat(eval_models_df, axis=1)
eval_models_df["plot_path"] = plot_path
eval_df.append(eval_models_df.reset_index())
eval_df = pd.concat(eval_df)
return eval_df, exp.benchmark_models
def summary_performance(
self, eval_df: pd.DataFrame, summary_path: str, benchmark_models: List[str]
):
logger.info("Summarizing performance")
models = self.default_models + benchmark_models
with open(summary_path, "w") as f:
results_comb = ["metric"] + models
exp_config = [col for col in eval_df.columns if col not in results_comb]
eval_df = eval_df.fillna("None")
f.write("<details><summary>Experiment Results</summary>\n\n")
for exp_number, (exp_desc, eval_exp_df) in enumerate(
eval_df.groupby(exp_config), start=1
):
exp_metadata = pd.DataFrame.from_dict(
{
"variable": exp_config,
"experiment": exp_desc,
}
)
experiment_name = exp_metadata.query("variable == 'experiment_name'")[
"experiment"
].iloc[0]
exp_metadata.query(
"variable not in ['plot_path', 'experiment_name']", inplace=True
)
f.write(f"## Experiment {exp_number}: {experiment_name}\n\n")
f.write("### Description:\n")
f.write(f"{exp_metadata.to_markdown(index=False)}\n\n")
f.write("### Results:\n")
f.write(
f"{eval_exp_df[results_comb].round(4).to_markdown(index=False)}\n\n"
)
f.write("### Plot:\n")
plot_path = eval_exp_df["plot_path"].iloc[0]
if plot_path.startswith("."):
plot_path = plot_path[1:]
if os.getenv("GITHUB_ACTIONS"):
plot_path = f"{os.getenv('PLOTS_REPO_URL')}/{plot_path}?raw=true"
f.write(f"![]({plot_path})\n\n")
f.write("</details>\n")
if __name__ == "__main__":
exp_config = ExperimentConfig(
config_path="./action_files/models_performance/experiments.yaml",
plot_dir="./action_files/models_performance/plots",
)
eval_df, benchmark_models = exp_config.run_experiments()
exp_config.summary_performance(
eval_df, "./action_files/models_performance/summary.md", benchmark_models
)
<details><summary>Experiment Results</summary>
## Experiment 1: air-passengers
### Description:
| variable | experiment |
|:--------------|:-------------|
| h | 12 |
| season_length | 12 |
| freq | MS |
| level | None |
| n_windows | 1 |
### Results:
| metric | timegpt-1 | timegpt-1-long-horizon | SeasonalNaive | Naive |
|:-----------|------------:|-------------------------:|----------------:|-----------:|
| mae | 12.6793 | 11.0623 | 47.8333 | 76 |
| mape | 0.027 | 0.0232 | 0.0999 | 0.1425 |
| mse | 213.936 | 199.132 | 2571.33 | 10604.2 |
| total_time | 2.4918 | 1.5065 | 0.0046 | 0.0045 |
### Plot:
![](/action_files/models_performance/plots/plot_air-passengers_12_12_MS_None_1.png)
## Experiment 2: air-passengers
### Description:
| variable | experiment |
|:--------------|:-------------|
| h | 24 |
| season_length | 12 |
| freq | MS |
| level | None |
| n_windows | 1 |
### Results:
| metric | timegpt-1 | timegpt-1-long-horizon | SeasonalNaive | Naive |
|:-----------|------------:|-------------------------:|----------------:|-----------:|
| mae | 58.1031 | 58.4587 | 71.25 | 115.25 |
| mape | 0.1257 | 0.1267 | 0.1552 | 0.2358 |
| mse | 4040.21 | 4110.79 | 5928.17 | 18859.2 |
| total_time | 0.5508 | 0.5551 | 0.0032 | 0.0028 |
### Plot:
![](/action_files/models_performance/plots/plot_air-passengers_24_12_MS_None_1.png)
## Experiment 3: electricity-multiple-series
### Description:
| variable | experiment |
|:--------------|:-------------|
| h | 24 |
| season_length | 24 |
| freq | H |
| level | None |
| n_windows | 1 |
### Results:
| metric | timegpt-1 | timegpt-1-long-horizon | SeasonalNaive | Naive |
|:-----------|------------:|-------------------------:|----------------:|---------------:|
| mae | 178.293 | 268.129 | 269.23 | 1331.02 |
| mape | 0.0234 | 0.0311 | 0.0304 | 0.1692 |
| mse | 121586 | 219467 | 213677 | 4.68961e+06 |
| total_time | 1.2402 | 1.5986 | 0.0046 | 0.0036 |
### Plot:
![](/action_files/models_performance/plots/plot_electricity-multiple-series_24_24_H_None_1.png)
## Experiment 4: electricity-multiple-series
### Description:
| variable | experiment |
|:--------------|:-------------|
| h | 168 |
| season_length | 24 |
| freq | H |
| level | None |
| n_windows | 1 |
### Results:
| metric | timegpt-1 | timegpt-1-long-horizon | SeasonalNaive | Naive |
|:-----------|------------:|-------------------------:|----------------:|---------------:|
| mae | 465.496 | 346.976 | 398.956 | 1119.26 |
| mape | 0.062 | 0.0436 | 0.0512 | 0.1583 |
| mse | 835064 | 403762 | 656723 | 3.17316e+06 |
| total_time | 0.6668 | 0.637 | 0.0046 | 0.0037 |
### Plot:
![](/action_files/models_performance/plots/plot_electricity-multiple-series_168_24_H_None_1.png)
## Experiment 5: electricity-multiple-series
### Description:
| variable | experiment |
|:--------------|:-------------|
| h | 336 |
| season_length | 24 |
| freq | H |
| level | None |
| n_windows | 1 |
### Results:
| metric | timegpt-1 | timegpt-1-long-horizon | SeasonalNaive | Naive |
|:-----------|--------------:|-------------------------:|----------------:|---------------:|
| mae | 558.702 | 459.769 | 602.926 | 1340.95 |
| mape | 0.0697 | 0.0565 | 0.0787 | 0.17 |
| mse | 1.22728e+06 | 739162 | 1.61572e+06 | 6.04619e+06 |
| total_time | 1.1054 | 1.3368 | 0.0046 | 0.0038 |
### Plot:
![](/action_files/models_performance/plots/plot_electricity-multiple-series_336_24_H_None_1.png)
</details>
#!/bin/bash
BASE_DIR="nbs/docs/"
SUB_DIRS=("getting-started" "capabilities" "deployment" "tutorials" "use-cases" "reference")
counter=0
for sub_dir in "${SUB_DIRS[@]}"; do
DIR="$BASE_DIR$sub_dir/"
if [[ -d "$DIR" ]]; then
while read -r ipynb_file; do
echo $counter
md_file="${ipynb_file%.ipynb}.md"
md_file="${md_file/docs/_docs/docs}"
quarto render "$ipynb_file" --to md --wrap=none
python -m action_files.readme_com.modify_markdown --file_path "$md_file" --slug_number "$counter"
((counter++))
done < <(find "$DIR" -type f -name "*.ipynb" -not -path "*/.ipynb_checkpoints/*" | sort)
else
echo "Directory $DIR does not exist."
fi
done
# process changelog
echo $counter
file_changelog="./nbs/_docs/docs/CHANGELOG.md"
cp ./CHANGELOG.md ${file_changelog}
python -m action_files.readme_com.modify_markdown --file_path "$file_changelog" --slug_number "$counter"
import os
import re
import fire
from dotenv import load_dotenv
load_dotenv()
def create_sdk_reference(
save_dir,
slug_number,
host_url=os.environ["README_HOST_URL"],
category=os.environ["README_CATEGORY"],
):
file_path = f"{save_dir}/{slug_number}_sdk_reference.md"
header = f"""---
title: "SDK Reference"
slug: "sdk_reference"
order: {slug_number}
type: "link"
link_url: "https://nixtla.mintlify.app/nixtla/timegpt.html"
link_external: true
category: {category}
---
"""
with open(file_path, "w", encoding="utf-8") as file:
file.write(header)
if __name__ == "__main__":
fire.Fire(create_sdk_reference)
import os
import re
from pathlib import Path
import requests
import fire
from dotenv import load_dotenv
load_dotenv()
def to_snake_case(s):
s = s.lower()
s = re.sub(r"(?<!^)(?=[A-Z])", "_", s).lower()
s = re.sub(r"\W", "_", s)
s = re.sub(r"_+", "_", s)
return s
def modify_markdown(
file_path,
slug_number=0,
host_url=os.environ["README_HOST_URL"],
category=os.environ["README_CATEGORY"],
api_key=os.environ["README_API_KEY"],
readme_version=os.environ["README_VERSION"],
):
with open(file_path, "r", encoding="utf-8") as file:
content = file.read()
dir_path = os.path.dirname(file_path)
if not dir_path.endswith("/"):
dir_path += "/"
# Extract and remove the first markdown header
pattern_header = re.compile(r"^#\s+(.*)\n+", re.MULTILINE)
match = pattern_header.search(content)
if match:
title = match.group(1)
content = pattern_header.sub("", content, count=1) # remove the first match
else:
title = "Something Amazing"
slug = to_snake_case(title)
# Get category id for this doc based on the parent folder name
url = "https://dash.readme.com/api/v1/categories"
headers = {"authorization": f"{api_key}",
"x-readme-version": f"{readme_version}"}
try:
response = requests.get(url, headers=headers)
categories = {category["slug"]:category["id"] for category in response.json()}
if Path(file_path).name == 'CHANGELOG.md':
category_slug = 'getting-started'
slug = category_slug + '-' + slug
else:
parent = Path(file_path).parents[0].name
grandparent = Path(file_path).parents[1].name
if grandparent == "docs":
category_slug = parent
slug = category_slug + '-' + slug
else:
category_slug = grandparent
subcategory = parent
slug = category_slug + '-' + subcategory + '-' + slug
category = categories[category_slug]
except:
pass
# Hide the unnecessary capabilities notebook for readme.com
if slug == 'capabilities-capabilities':
hidden = True
else:
hidden = False
# Prepare the new header
header = f"""---
title: "{title}"
slug: "{slug}"
order: {slug_number}
category: {category}
hidden: {hidden}
---
"""
# Remove parts delimited by ::: :::
pattern_delimited = re.compile(r":::.*?:::", re.DOTALL)
content = pattern_delimited.sub("", content)
# Modify image paths
content = content.replace("![figure](../../", f"![figure]({host_url}/nbs/")
pattern_image = re.compile(r"!\[\]\(((?!.*\.svg).*?)\)")
modified_content = pattern_image.sub(
r"![](" + host_url + dir_path + r"\1)", content
)
# Concatenate new header and modified content
final_content = header + modified_content
with open(file_path, "w", encoding="utf-8") as file:
file.write(final_content)
if __name__ == "__main__":
fire.Fire(modify_markdown)
<svg width="366" height="211" viewBox="0 0 366 211" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M28.364 207.934L179.581 55.9773C180.996 54.5556 183.291 54.5556 184.686 55.9773L257.15 129.41C258.565 130.832 260.84 130.832 262.255 129.41L362.939 28.2336C364.354 26.8119 364.354 24.5255 362.939 23.1037L341.009 1.06633C339.594 -0.355442 337.319 -0.355442 335.904 1.06633L262.847 74.4795C261.433 75.9013 259.157 75.9013 257.743 74.4795L184.686 1.10475C183.271 -0.317016 180.996 -0.317016 179.581 1.10475L106.525 74.518C105.11 75.9397 102.835 75.9397 101.42 74.518L28.364 1.10475C26.9492 -0.317016 24.6739 -0.317016 23.2591 1.10475L1.06114 23.4111C-0.353714 24.8329 -0.353714 27.1193 1.06114 28.541L74.1173 101.954C75.5322 103.376 75.5322 105.662 74.1173 107.084L1.06114 180.497C-0.353714 181.919 -0.353714 184.206 1.06114 185.627L23.2591 207.934C24.6739 209.355 26.9492 209.355 28.364 207.934Z" fill="#1F1F1F"/>
<path d="M246.444 145.37L247 144.81L222.464 120.073C221.045 118.642 218.764 118.642 217.346 120.073L184.95 152.733C183.532 154.163 181.251 154.163 179.832 152.733L147.437 120.073C146.018 118.642 143.737 118.642 142.319 120.073L120.064 142.51C118.645 143.94 118.645 146.24 120.064 147.67L179.832 207.927C181.251 209.358 183.532 209.358 184.95 207.927L246.732 145.641L246.463 145.37H246.444Z" fill="#1F1F1F"/>
<path d="M298.086 119.948L275.885 142.148C274.473 143.56 274.473 145.85 275.885 147.262L336.128 207.505C337.54 208.917 339.83 208.917 341.242 207.505L363.443 185.305C364.855 183.893 364.855 181.603 363.443 180.191L303.199 119.948C301.787 118.535 299.498 118.535 298.086 119.948Z" fill="#1F1F1F"/>
</svg>