Commit efb99f11 authored by suily

Initial commit
{
  "python.pythonPath": "/usr/bin/python3"
}
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
license: apache-2.0
library_name: timesfm
pipeline_tag: time-series-forecasting
---
# TimesFM
TimesFM (Time Series Foundation Model) is a pretrained time-series foundation model developed by Google Research for time-series forecasting.
**Resources and Technical Documentation**:
* Paper: [A decoder-only foundation model for time-series forecasting](https://arxiv.org/abs/2310.10688), to appear in ICML 2024.
* [Google Research blog](https://research.google/blog/a-decoder-only-foundation-model-for-time-series-forecasting/)
* [GitHub repo](https://github.com/google-research/timesfm)
**Authors**: Google Research
This is not an officially supported Google product.
## Checkpoint timesfm-1.0-200m
`timesfm-1.0-200m` is the first open model checkpoint:
- It performs univariate time series forecasting for context lengths up to 512 time points and any horizon length, with an optional frequency indicator.
- It focuses on point forecasts and does not support probabilistic forecasts. We experimentally offer quantile heads but they have not been calibrated after pretraining.
- It requires the context to be contiguous (i.e. no "holes"), and the context and the horizon to be of the same frequency.
## Benchmarks
Please refer to our result tables on the [extended benchmarks](https://github.com/google-research/timesfm/blob/master/experiments/extended_benchmarks/tfm_results.png) and the [long horizon benchmarks](https://github.com/google-research/timesfm/blob/master/experiments/long_horizon_benchmarks/tfm_long_horizon.png).
Please look into the README files in the respective benchmark directories within `experiments/` for instructions for running TimesFM on the respective benchmarks.
## Installation
This HuggingFace repo hosts TimesFM checkpoints. Please visit our [GitHub repo](https://github.com/google-research/timesfm) and follow the instructions there to install the `timesfm` library for model inference.
In particular, the dependency `lingvo` does not support ARM architectures, so the inference code does not work on machines with Apple silicon. We are aware of this issue and are working on a solution. Stay tuned.
## Usage
### Initialize the model and load a checkpoint.
The base class can then be loaded as follows:
```python
import timesfm
tfm = timesfm.TimesFm(
    context_len=<context>,
    horizon_len=<horizon>,
    input_patch_len=32,
    output_patch_len=128,
    num_layers=20,
    model_dims=1280,
    backend=<backend>,
)
tfm.load_from_checkpoint(repo_id="google/timesfm-1.0-200m")
```
Note that these four parameters are fixed when loading the 200m model:
```python
input_patch_len=32,
output_patch_len=128,
num_layers=20,
model_dims=1280,
```
1. The context_len here can be set as the max context length **of the model**. You can provide a shorter series to the `tfm.forecast()` function and the model will handle it. Currently, the model handles a max context length of 512, which can be increased in later releases. The input time series can have **any context length**. Padding / truncation will be handled by the inference code if needed.
2. The horizon length can be set to anything. We recommend setting it to the largest horizon length you would need in the forecasting tasks for your application. We generally recommend horizon length <= context length but it is not a requirement in the function call.
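The padding and truncation behavior described above is handled inside the inference code, but it can be pictured with a small sketch. This is an illustration only, under the assumption of simple left-padding; `fit_context` is a hypothetical helper, not the library's actual preprocessing:

```python
# Illustrative sketch (NOT the timesfm library's actual code): how an input
# context might be fitted to a fixed model context length.
def fit_context(series, context_len=512, pad_value=0.0):
    """Left-pad short series; keep only the most recent points of long ones."""
    if len(series) >= context_len:
        # Truncate: keep the most recent `context_len` points.
        return list(series[-context_len:])
    # Front-pad so the most recent points stay right-aligned.
    return [pad_value] * (context_len - len(series)) + list(series)
```

The key design point is that the most recent observations are always preserved, since they carry the most signal for forecasting.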
### Perform inference
We provide APIs to forecast from either array inputs or a `pandas` dataframe. Both forecast methods expect (1) the input time series contexts and (2) their frequencies. Please look at the documentation of the functions `tfm.forecast()` and `tfm.forecast_on_df()` for detailed instructions.
In particular, regarding the frequency, TimesFM expects a categorical indicator valued in {0, 1, 2}:
- **0** (default): high frequency, long horizon time series. We recommend using this for time series up to daily granularity.
- **1**: medium frequency time series. We recommend using this for weekly and monthly data.
- **2**: low frequency, short horizon time series. We recommend using this for anything beyond monthly, e.g. quarterly or yearly.
This categorical value should be directly provided with the array inputs. For dataframe inputs, we convert the conventional letter coding of frequencies to our expected categories as follows:
- **0**: T, MIN, H, D, B, U
- **1**: W, M
- **2**: Q, Y
Note that you do **not** have to strictly follow our recommendations here. Although this is our setup during model training and we expect it to offer the best forecast results, you can also view the frequency input as a free parameter and modify it for your specific use case.
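As an illustration of the mapping above (the `timesfm` package ships its own `freq_map`; this standalone version is only a sketch for clarity):

```python
# Illustrative letter-code -> category mapping, following the table above.
# The timesfm package exposes its own `freq_map`; this is just a sketch.
_FREQ_CATEGORY = {
    **{code: 0 for code in ("T", "MIN", "H", "D", "B", "U")},  # high frequency
    **{code: 1 for code in ("W", "M")},                        # medium frequency
    **{code: 2 for code in ("Q", "Y")},                        # low frequency
}

def freq_category(freq: str) -> int:
    """Return the categorical frequency indicator for a pandas-style code."""
    return _FREQ_CATEGORY[freq.upper()]
```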
Examples:
Array inputs, with the frequencies set to high, medium, and low respectively.
```python
import numpy as np

forecast_input = [
    np.sin(np.linspace(0, 20, 100)),
    np.sin(np.linspace(0, 20, 200)),
    np.sin(np.linspace(0, 20, 400)),
]
frequency_input = [0, 1, 2]

point_forecast, experimental_quantile_forecast = tfm.forecast(
    forecast_input,
    freq=frequency_input,
)
```
A `pandas` dataframe, with the frequency set to "M" (monthly).
```python
import pandas as pd
# e.g. input_df is
# unique_id ds y
# 0 T1 1975-12-31 697458.0
# 1 T1 1976-01-31 1187650.0
# 2 T1 1976-02-29 1069690.0
# 3 T1 1976-03-31 1078430.0
# 4 T1 1976-04-30 1059910.0
# ... ... ... ...
# 8175 T99 1986-01-31 602.0
# 8176 T99 1986-02-28 684.0
# 8177 T99 1986-03-31 818.0
# 8178 T99 1986-04-30 836.0
# 8179 T99 1986-05-31 878.0
forecast_df = tfm.forecast_on_df(
    inputs=input_df,
    freq="M",  # monthly
    value_name="y",
    num_jobs=-1,
)
```
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""TimesFM init file."""
from __future__ import absolute_import
from .src.patched_decoder import PatchedTimeSeriesDecoder
from .src.timesfm import TimesFm
from .src.timesfm import freq_map
# Copyright 2024 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/bin/bash
gdown --fuzzy https://drive.google.com/file/d/1alE33S1GmP5wACMXaLu50rDIoVzBM4ik/view?usp=share_link
unzip all_six_datasets.zip
mv all_six_datasets/* .
rm -rf all_six_datasets*
# How to Contribute
We would love to accept your patches and contributions to this project.
## Before you begin
### Sign our Contributor License Agreement
Contributions to this project must be accompanied by a
[Contributor License Agreement](https://cla.developers.google.com/about) (CLA).
You (or your employer) retain the copyright to your contribution; this simply
gives us permission to use and redistribute your contributions as part of the
project.
If you or your current employer have already signed the Google CLA (even if it
was for a different project), you probably don't need to do it again.
Visit <https://cla.developers.google.com/> to see your current agreements or to
sign a new one.
### Review our Community Guidelines
This project follows [Google's Open Source Community
Guidelines](https://opensource.google/conduct/).
## Contribution process
### Code Reviews
All submissions, including submissions by project members, require review. We
use [GitHub pull requests](https://docs.github.com/articles/about-pull-requests)
for this purpose.
name: tfm_env
channels:
  - conda-forge
  - defaults
  - anaconda
dependencies:
  - jupyterlab
  - pip
  - python=3.10
  - pip:
      - huggingface_hub[cli]
      - utilsforecast
      - praxis
      - paxml
      - jax[cuda12]==0.4.26
      - einshape
name: tfm_env
channels:
  - conda-forge
  - defaults
  - anaconda
dependencies:
  - jupyterlab
  - pip
  - python=3.10
  - pip:
      - huggingface_hub[cli]
      - utilsforecast
      - praxis
      - paxml
      - jax[cpu]==0.4.26
      - einshape
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List, Optional, Tuple
import os
import pandas as pd
from gluonts.time_feature.seasonality import get_seasonality as _get_seasonality
from tqdm import tqdm
from utilsforecast.processing import (
    backtest_splits,
    drop_index_if_pandas,
    join,
    maybe_compute_sort_indices,
    take_rows,
    vertical_concat,
)
from time import time
from dotenv import load_dotenv
from nixtla import NixtlaClient
def get_seasonality(freq: str) -> int:
    return _get_seasonality(freq, seasonalities={"D": 7})


def maybe_convert_col_to_datetime(df: pd.DataFrame, col_name: str) -> pd.DataFrame:
    if not pd.api.types.is_datetime64_any_dtype(df[col_name]):
        df = df.copy()
        df[col_name] = pd.to_datetime(df[col_name])
    return df
def zero_pad_time_series(df, freq, min_length=36):
    """If a time series is shorter than min_length, front pad it with zeros."""
    # 1. Calculate required padding for each unique_id
    value_counts = df["unique_id"].value_counts()
    to_pad = value_counts[value_counts < min_length].index
    # 2. Create a new DataFrame to hold padded data
    padded_data = []
    for unique_id in to_pad:
        # 2a. Filter data for the specific unique_id
        subset = df[df["unique_id"] == unique_id]
        if len(subset) > min_length:
            padded_data.append(subset)
        else:
            # 2b. Determine earliest date and calculate padding dates
            start_date = subset["ds"].min()
            padding_dates = pd.date_range(
                end=start_date,
                periods=min_length - len(subset) + 1,
                freq=freq,
            )[:-1]  # Exclude the start_date itself
            # 2c. Create padding data
            padding_df = pd.DataFrame(
                {"ds": padding_dates, "unique_id": unique_id, "y": 0}  # Zero padding
            )
            # 2d. Combine original and padding data, and append to the list
            padded_data.append(pd.concat([padding_df, subset]).sort_values("ds"))
    # 3. Combine all padded data and original data (unchanged)
    result_df = pd.concat(padded_data + [df[~df["unique_id"].isin(to_pad)]])
    return result_df
class Forecaster:
    """Borrowed from
    https://github.com/Nixtla/nixtla/tree/main/experiments/foundation-time-series-arena/xiuhmolpilli/models.
    """

    def forecast(
        self,
        df: pd.DataFrame,
        h: int,
        freq: str,
    ) -> pd.DataFrame:
        raise NotImplementedError

    def cross_validation(
        self,
        df: pd.DataFrame,
        h: int,
        freq: str,
        n_windows: int = 1,
        step_size: int | None = None,
    ) -> pd.DataFrame:
        df = maybe_convert_col_to_datetime(df, "ds")
        # mlforecast cv code
        results = []
        sort_idxs = maybe_compute_sort_indices(df, "unique_id", "ds")
        if sort_idxs is not None:
            df = take_rows(df, sort_idxs)
        splits = backtest_splits(
            df,
            n_windows=n_windows,
            h=h,
            id_col="unique_id",
            time_col="ds",
            freq=pd.tseries.frequencies.to_offset(freq),
            step_size=h if step_size is None else step_size,
        )
        for _, (cutoffs, train, valid) in tqdm(enumerate(splits)):
            if len(valid.columns) > 3:
                raise NotImplementedError(
                    "Cross validation with exogenous variables is not yet supported."
                )
            y_pred = self.forecast(
                df=train,
                h=h,
                freq=freq,
            )
            y_pred = join(y_pred, cutoffs, on="unique_id", how="left")
            result = join(
                valid[["unique_id", "ds", "y"]],
                y_pred,
                on=["unique_id", "ds"],
            )
            if result.shape[0] < valid.shape[0]:
                raise ValueError(
                    "Cross validation result produced less results than expected. "
                    "Please verify that the frequency parameter (freq) matches your series' "
                    "and that there aren't any missing periods."
                )
            results.append(result)
        out = vertical_concat(results)
        out = drop_index_if_pandas(out)
        first_out_cols = ["unique_id", "ds", "cutoff", "y"]
        remaining_cols = [c for c in out.columns if c not in first_out_cols]
        fcst_cv_df = out[first_out_cols + remaining_cols]
        return fcst_cv_df
class TimeGPT(Forecaster):
    """Borrowed from
    https://github.com/Nixtla/nixtla/tree/main/experiments/foundation-time-series-arena/xiuhmolpilli/models.

    We modify the class to take care of edge cases.
    """

    def __init__(
        self,
        api_key: str | None = None,
        base_url: Optional[str] = None,
        max_retries: int = 1,
        model: str = "timegpt-1",
        alias: str = "TimeGPT",
    ):
        self.api_key = api_key
        self.base_url = base_url
        self.max_retries = max_retries
        self.model = model
        self.alias = alias

    def _get_client(self) -> NixtlaClient:
        if self.api_key is None:
            api_key = os.environ["NIXTLA_API_KEY"]
        else:
            api_key = self.api_key
        return NixtlaClient(
            api_key=api_key,
            base_url=self.base_url,
            max_retries=self.max_retries,
        )

    def forecast(
        self,
        df: pd.DataFrame,
        h: int,
        freq: str,
        level: List = [90.0],
        chunk_size: Optional[int] = None,
    ) -> pd.DataFrame:
        client = self._get_client()
        fcst_df = None
        if chunk_size is None:
            fcst_df = client.forecast(
                df=df,
                h=h,
                freq=freq,
                level=level,
                model=self.model,
            )
        else:
            all_unique_ids = df["unique_id"].unique()
            all_fcst_df = []
            for i in range(0, len(all_unique_ids), chunk_size):
                chunk_ids = all_unique_ids[i : i + chunk_size]
                chunk_df = df[df["unique_id"].isin(chunk_ids)]
                fct_chunk_df = client.forecast(
                    df=chunk_df,
                    h=h,
                    freq=freq,
                    level=level,
                )
                all_fcst_df.append(fct_chunk_df)
            fcst_df = pd.concat(all_fcst_df)
        fcst_df["ds"] = pd.to_datetime(fcst_df["ds"])
        replace_dict = {}
        for col in fcst_df.columns:
            if col.startswith("TimeGPT"):
                replace_dict[col] = col.replace("TimeGPT", self.alias)
        fcst_df = fcst_df.rename(columns=replace_dict)
        return fcst_df
def run_timegpt(
    train_df: pd.DataFrame,
    horizon: int,
    freq: str,
    seasonality: int,
    level: List[int],
    dataset: str,
    model: str = "timegpt-1",
) -> Tuple[pd.DataFrame, float, str]:
    os.environ["NIXTLA_ID_AS_COL"] = "true"
    # TODO: supply an API key; NixtlaClient falls back to the NIXTLA_API_KEY
    # environment variable when api_key is None. Never hardcode the key here.
    model = TimeGPT(model="timegpt-1", alias=model)
    padded_train_df = zero_pad_time_series(train_df, freq)
    init_time = time()
    # For these datasets the API fails if we do not chunk.
    if dataset in ["m5", "m4_quarterly"]:
        chunk_size = 5000
    else:
        chunk_size = None
    fcsts_df = model.forecast(
        df=padded_train_df, h=horizon, level=level, freq=freq, chunk_size=chunk_size
    )
    total_time = time() - init_time
    # In case levels are not returned we replace the levels with the mean predictions.
    # Note that this does not affect the results table as we only compare on point
    # forecasting metrics.
    for lvl in level:
        if f"{model.alias}-lo-{lvl}" not in fcsts_df.columns:
            fcsts_df[f"{model.alias}-lo-{lvl}"] = fcsts_df[model.alias]
        if f"{model.alias}-hi-{lvl}" not in fcsts_df.columns:
            fcsts_df[f"{model.alias}-hi-{lvl}"] = fcsts_df[model.alias]
    return fcsts_df, total_time, model.alias
name: tfm_env
channels:
  - conda-forge
  - defaults
  - anaconda
dependencies:
  - jupyterlab
  - pip
  - python=3.10
  - pip:
      - datasetsforecast
      - fire
      - git+https://github.com/awslabs/gluon-ts.git
      - huggingface_hub[cli]
      - neuralforecast
      - orjson
      - statsforecast
      - utilsforecast
      - git+https://github.com/amazon-science/chronos-forecasting.git
      - praxis
      - paxml
      - jax[cuda12]==0.4.26
      - einshape
      - python-dotenv
      - nixtla>=0.5.1
      - rich
name: tfm_env
channels:
  - conda-forge
  - defaults
  - anaconda
dependencies:
  - jupyterlab
  - pip
  - python=3.10
  - pip:
      - datasetsforecast
      - fire
      - git+https://github.com/awslabs/gluon-ts.git
      - huggingface_hub[cli]
      - neuralforecast
      - orjson
      - statsforecast
      - utilsforecast
      - git+https://github.com/amazon-science/chronos-forecasting.git
      - praxis
      - paxml
      - jax[cpu]==0.4.26
      - einshape
      - python-dotenv
      - nixtla>=0.5.1
      - rich
# Extended Benchmarks
The benchmark setting has been borrowed from Nixtla's original [benchmarking](https://github.com/AzulGarza/nixtla/tree/main/experiments/amazon-chronos) of time-series foundation models against a strong statistical ensemble. More datasets were later added by the Chronos team in this [pull request](https://github.com/shchur/nixtla/tree/chronos-full-eval/experiments/amazon-chronos). We compare on all the datasets in this extended benchmark.
## Running TimesFM on the benchmark
Install the environment and the package as detailed in the main README and then follow the steps from the base directory.
```
conda activate tfm_env
TF_CPP_MIN_LOG_LEVEL=2 XLA_PYTHON_CLIENT_PREALLOCATE=false python3 -m experiments.extended_benchmarks.run_timesfm --model_path=<model_path> --backend="gpu"
```
In the above, `<model_path>` should point to the checkpoint directory that can be downloaded from HuggingFace.
Note: In the current version of TimesFM we focus on point forecasts, so MASE and sMAPE have been calculated using the quantile head corresponding to the median, i.e., the 0.5 quantile. We do offer 10 quantile heads, but they have not been calibrated after pretraining. We recommend using them with caution, or calibrating/conformalizing them on a hold-out set for your applications. More to follow in later versions.
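As a concrete illustration of calibrating uncalibrated intervals on a hold-out set, here is a minimal split-conformal sketch. This is not part of TimesFM; `conformal_interval` is a hypothetical helper that turns point forecasts plus hold-out residuals into intervals with approximate coverage 1 - alpha:

```python
import math

# Minimal split-conformal sketch (illustrative only, not TimesFM code):
# the interval half-width is the ceil((1 - alpha) * (n + 1))-th smallest
# absolute hold-out residual, which gives approximate 1 - alpha coverage.
def conformal_interval(point_forecasts, residuals, alpha=0.1):
    """Build symmetric conformal intervals around point forecasts.

    `residuals` are hold-out errors (actual - forecast).
    """
    n = len(residuals)
    scores = sorted(abs(r) for r in residuals)
    k = min(math.ceil((1 - alpha) * (n + 1)), n)  # conformal rank
    half_width = scores[k - 1]
    return [(f - half_width, f + half_width) for f in point_forecasts]
```

The same idea applies to the quantile heads directly: compute how much each raw quantile under- or over-covers on the hold-out set and shift it by the corresponding empirical residual quantile.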
## Benchmark Results
![Benchmark Results Table](./tfm_extended_new.png)
__Update:__ We have added TimeGPT-1 to the benchmark results. We had to remove the Dominick dataset as we were not able to run TimeGPT-1 on this benchmark. Note that the previous results including Dominick remain available at `./tfm_results.png`. In order to reproduce the results for TimeGPT-1, please run `run_timegpt.py`.
_Remark:_ All baselines except the ones involving TimeGPT were run on a [g2-standard-32](https://cloud.google.com/compute/docs/gpus). Since TimeGPT-1 can only be accessed through an API, the time column might not reflect the true speed of the model, as it also includes the communication cost. Moreover, we are not sure about the exact backend hardware for TimeGPT.
We can see that TimesFM performs the best in terms of both MASE and sMAPE. More importantly, it is much faster than the other methods; in particular, it is more than 600x faster than StatisticalEnsemble and 80x faster than Chronos (Large).
Note: This benchmark only compares on one small horizon window for long-horizon datasets like ETT hourly and 15 minutes. More in-depth comparisons on longer-horizon rolling validation tasks are presented in our long horizon benchmarks.
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Evaluation script for timegpt."""
import os
import sys
import time
from absl import flags
import numpy as np
import pandas as pd
from experiments.baselines.timegpt_pipeline import run_timegpt  # TODO: fix the import path
from utils import ExperimentHandler  # TODO: fix the import path
dataset_names = [
    "m1_monthly",
    "m1_quarterly",
    "m1_yearly",
    "m3_monthly",
    "m3_other",
    "m3_quarterly",
    "m3_yearly",
    "m4_quarterly",
    "m4_yearly",
    "tourism_monthly",
    "tourism_quarterly",
    "tourism_yearly",
    "nn5_daily_without_missing",
    "m5",
    "nn5_weekly",
    "traffic",
    "weather",
    "australian_electricity_demand",
    "car_parts_without_missing",
    "cif_2016",
    "covid_deaths",
    "ercot",
    "ett_small_15min",
    "ett_small_1h",
    "exchange_rate",
    "fred_md",
    "hospital",
]
_MODEL_NAME = flags.DEFINE_string(
    "model_name",
    "timegpt-1-long-horizon",
    "Path to model, can also be set to timegpt-1",
)
_SAVE_DIR = flags.DEFINE_string("save_dir", "./results", "Save directory")
QUANTILES = list(np.arange(1, 10) / 10.0)

def main():
  results_list = []
  run_id = np.random.randint(100000)
  model_name = _MODEL_NAME.value
  for dataset in dataset_names:
    print(f"Evaluating model {model_name} on dataset {dataset}", flush=True)
    exp = ExperimentHandler(dataset, quantiles=QUANTILES)
    train_df = exp.train_df
    horizon = exp.horizon
    seasonality = exp.seasonality
    freq = exp.freq
    level = exp.level
    fcsts_df, total_time, model_name = run_timegpt(
        train_df=train_df,
        horizon=horizon,
        model=model_name,
        seasonality=seasonality,
        freq=freq,
        dataset=dataset,
        level=level,
    )
    time_df = pd.DataFrame({"time": [total_time], "model": model_name})
    fcsts_df = exp.fcst_from_level_to_quantiles(fcsts_df, model_name)
    results = exp.evaluate_from_predictions(
        models=[model_name], fcsts_df=fcsts_df, times_df=time_df
    )
    print(results, flush=True)
    results_list.append(results)
  results_full = pd.concat(results_list)
  save_path = os.path.join(_SAVE_DIR.value, str(run_id))
  print(f"Saving results to {save_path}", flush=True)
  os.makedirs(save_path, exist_ok=True)
  results_full.to_csv(f"{save_path}/results.csv")

if __name__ == "__main__":
  FLAGS = flags.FLAGS
  FLAGS(sys.argv)
  main()
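# From the call site above, run_timegpt is expected to return a
# (fcsts_df, total_time, model_name) tuple. A minimal hypothetical stub
# (run_timegpt_stub is fabricated here, not the real pipeline function)
# illustrating that contract:
#
# ```python
# import time
# import pandas as pd
#
# def run_timegpt_stub(train_df, horizon, model, seasonality, freq, dataset, level):
#   start = time.time()
#   # A real implementation would call the TimeGPT API here; we fabricate a
#   # flat forecast so only the return shape is visible.
#   fcsts_df = pd.DataFrame({
#       "unique_id": ["series_0"] * horizon,
#       "ds": pd.date_range("2024-01-01", periods=horizon, freq="D"),
#       model: [train_df["y"].mean()] * horizon,
#   })
#   return fcsts_df, time.time() - start, model
#
# train_df = pd.DataFrame({
#     "unique_id": "series_0",
#     "ds": pd.date_range("2023-01-01", periods=30, freq="D"),
#     "y": range(30),
# })
# fcsts, total_time, name = run_timegpt_stub(
#     train_df, 7, "timegpt-1-long-horizon", 7, "D", "demo", [80]
# )
# print(len(fcsts), name)
# ```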
# Copyright 2024 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Evaluation script for timesfm."""
import os
import sys
import time
from absl import flags
import numpy as np
import pandas as pd
from paxml import checkpoints
sys.path.append(os.getcwd())
from src import timesfm  # TODO: fix import path
from experiments.extended_benchmarks.utils import ExperimentHandler
from jax.lib import xla_bridge
# Sugon cluster test run
dataset_names = [ # context_len=512
"ett_small_15min",
# "traffic",
# "m3_quarterly",
# "m3_yearly",
"tourism_yearly",
]
# dataset_names = [
# "m1_monthly",
# "m1_quarterly",
# "m1_yearly",
# "m3_monthly",
# "m3_other",
# "m3_quarterly",
# "m3_yearly",
# "m4_quarterly",
# "m4_yearly",
# "tourism_monthly",
# "tourism_quarterly",
# "tourism_yearly",
# "nn5_daily_without_missing",
# "m5",
# "nn5_weekly",
# "traffic",
# "weather",
# "australian_electricity_demand",
# "car_parts_without_missing",
# "cif_2016",
# "covid_deaths",
# "ercot",
# "ett_small_15min",
# "ett_small_1h",
# "exchange_rate",
# "fred_md",
# "hospital",
# ]
context_dict = {
"cif_2016": 32,
"tourism_yearly": 64,
"covid_deaths": 64,
"tourism_quarterly": 64,
"tourism_monthly": 64,
"m1_monthly": 64,
"m1_quarterly": 64,
"m1_yearly": 64,
"m3_monthly": 64,
"m3_other": 64,
"m3_quarterly": 64,
"m3_yearly": 64,
"m4_quarterly": 64,
"m4_yearly": 64,
}
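# Standalone sketch: the per-dataset overrides above are looked up with a
# 512-step fallback; dict.get expresses the same lookup compactly.
#
# ```python
# context_dict = {"cif_2016": 32, "tourism_yearly": 64}
#
# def lookup_context(dataset):
#   return context_dict.get(dataset, 512)
#
# print(lookup_context("cif_2016"), lookup_context("ett_small_15min"))
# # prints: 32 512
# ```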
# TODO: model location
_MODEL_PATH = flags.DEFINE_string(
    "model_path", "model/checkpoints", "Path to model"
)
# TODO: tune these parameters
_BATCH_SIZE = flags.DEFINE_integer("batch_size", 64, "Batch size")
_HORIZON = flags.DEFINE_integer("horizon", 128, "Horizon")
_BACKEND = flags.DEFINE_string("backend", "gpu", "Backend")
_NUM_JOBS = flags.DEFINE_integer("num_jobs", 1, "Number of jobs")
_SAVE_DIR = flags.DEFINE_string("save_dir", "./results", "Save directory")
QUANTILES = list(np.arange(1, 10) / 10.0)

def main():
  results_list = []
  tfm = timesfm.TimesFm(
      context_len=512,
      horizon_len=_HORIZON.value,
      input_patch_len=32,
      output_patch_len=128,
      num_layers=20,
      model_dims=1280,
      backend=_BACKEND.value,
      per_core_batch_size=_BATCH_SIZE.value,
      quantiles=QUANTILES,
  )
  tfm.load_from_checkpoint(  # load the model weights from the checkpoint
      _MODEL_PATH.value,
      checkpoint_type=checkpoints.CheckpointType.FLAX,
  )
  run_id = np.random.randint(100000)
  model_name = "timesfm"
  for dataset in dataset_names:
    print(f"Evaluating model {model_name} on dataset {dataset}", flush=True)
    exp = ExperimentHandler(dataset, quantiles=QUANTILES)
    context_len = context_dict.get(dataset, 512)
    train_df = exp.train_df
    freq = exp.freq
    init_time = time.time()
    fcsts_df = tfm.forecast_on_df(
        inputs=train_df,
        freq=freq,
        value_name="y",
        model_name=model_name,
        forecast_context_len=context_len,
        num_jobs=_NUM_JOBS.value,
    )
    total_time = time.time() - init_time
    time_df = pd.DataFrame({"time": [total_time], "model": model_name})
    results = exp.evaluate_from_predictions(
        models=[model_name], fcsts_df=fcsts_df, times_df=time_df
    )
    print(results, flush=True)
    results_list.append(results)
  results_full = pd.concat(results_list)
  save_path = os.path.join(_SAVE_DIR.value, str(run_id))
  print(f"Saving results to {save_path}", flush=True)
  os.makedirs(save_path, exist_ok=True)
  results_full.to_csv(f"{save_path}/results.csv")

if __name__ == "__main__":
  # debug1: test torch-gpu / jax-gpu / TensorFlow-gpu (only the JAX backend
  # is checked here)
  jax_backend = xla_bridge.get_backend().platform
  print(jax_backend)
  if jax_backend != "gpu":
    sys.exit("JAX backend is not GPU; aborting.")
  FLAGS = flags.FLAGS
  FLAGS(sys.argv)
  main()