Chronos datasets

Time series datasets used for training and evaluation of the Chronos forecasting models.

Note that some Chronos datasets (ETTh, ETTm, brazilian_cities_temperature, and spanish_energy_and_weather) rely on a custom builder script and are therefore available in the companion repo autogluon/chronos_datasets_extra.
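
These can be loaded the same way by pointing load_dataset at the companion repo. A minimal sketch (the config name ETTh is an assumption based on the dataset names above; datasets backed by a builder script typically require trust_remote_code=True in recent versions of 🤗 datasets):

import datasets

# Assumed config name; builder-script datasets may need trust_remote_code=True
ds = datasets.load_dataset("autogluon/chronos_datasets_extra", "ETTh", split="train", trust_remote_code=True)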

See the paper for more information.

Data format and usage

All datasets satisfy the following high-level schema (a snippet for verifying it programmatically follows the list):

  • Each dataset row corresponds to a single (univariate or multivariate) time series.
  • There exists one column with name id and type string that contains the unique identifier of each time series.
  • There exists one column of type Sequence with dtype timestamp[ms]. This column contains the timestamps of the observations. Timestamps are guaranteed to have a regular frequency that can be obtained with pandas.infer_freq.
  • There exists at least one column of type Sequence with numeric (float, double, or int) dtype. These columns can be interpreted as target time series.
  • For each row, all columns of type Sequence have the same length.
  • Remaining columns of types other than Sequence (e.g., string or float) can be interpreted as static covariates.
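
As a quick sanity check, this schema can be inspected on a loaded dataset. A minimal sketch using the m4_daily config described below (pandas is assumed to be installed):

import datasets
import pandas as pd

ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_daily", split="train")
print(ds.features)  # column names and types: id (string), timestamp and target (Sequence), ...

# timestamps have a regular frequency that pandas can recover
print(pd.infer_freq(pd.DatetimeIndex(ds[0]["timestamp"])))  # "D" for daily data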

Datasets can be loaded using the 🤗 datasets library:

import datasets

ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_daily", split="train")
ds.set_format("numpy")  # sequences returned as numpy arrays

NOTE: The train split of all datasets contains the full time series and has no relation to the train/test split used in the Chronos paper.

Example entry in the m4_daily dataset

>>> ds[0]
{'id': 'T000000',
 'timestamp': array(['1994-03-01T12:00:00.000', '1994-03-02T12:00:00.000',
        '1994-03-03T12:00:00.000', ..., '1996-12-12T12:00:00.000',
        '1996-12-13T12:00:00.000', '1996-12-14T12:00:00.000'],
       dtype='datetime64[ms]'),
 'target': array([1017.1, 1019.3, 1017. , ..., 2071.4, 2083.8, 2080.6], dtype=float32),
 'category': 'Macro'}

Converting to pandas

We can easily convert data in this format to a long-format data frame:

def to_pandas(ds: datasets.Dataset) -> "pd.DataFrame":
    """Convert dataset to long data frame format."""
    sequence_columns = [col for col in ds.features if isinstance(ds.features[col], datasets.Sequence)]
    return ds.to_pandas().explode(sequence_columns).infer_objects()

Example output

>>> print(to_pandas(ds).head())
        id           timestamp  target category
0  T000000 1994-03-01 12:00:00  1017.1    Macro
1  T000000 1994-03-02 12:00:00  1019.3    Macro
2  T000000 1994-03-03 12:00:00  1017.0    Macro
3  T000000 1994-03-04 12:00:00  1019.2    Macro
4  T000000 1994-03-05 12:00:00  1018.7    Macro

Dealing with large datasets

Note that some datasets, such as subsets of WeatherBench, are extremely large (~100GB). To work with them efficiently, we recommend either loading them from disk (files will be downloaded to disk, but won't all be loaded into memory)

ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_daily", keep_in_memory=False, split="train")

or, for the largest datasets like weatherbench_hourly_temperature, reading them in streaming format (chunks will be downloaded one at a time)

ds = datasets.load_dataset("autogluon/chronos_datasets", "weatherbench_hourly_temperature", streaming=True, split="train")
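
With streaming=True, load_dataset returns an IterableDataset, which is consumed by iteration rather than indexing. A minimal sketch using the standard library's itertools:

import itertools

# records are fetched lazily; peek at the first two without downloading the full dataset
for row in itertools.islice(ds, 2):
    print(row["id"], list(row.keys()))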

Chronos training corpus with TSMixup & KernelSynth

The training corpus used for training the Chronos models can be loaded via the configs training_corpus_tsmixup_10m (10M TSMixup augmentations of real-world data) and training_corpus_kernel_synth_1m (1M synthetic time series generated with KernelSynth), e.g.,

ds = datasets.load_dataset("autogluon/chronos_datasets", "training_corpus_tsmixup_10m", streaming=True, split="train")

Note that since the training corpus was obtained by combining various synthetic and real-world time series, the timestamps contain dummy values that have no connection to the original data.

License

The datasets in this collection are distributed under different open-source licenses. Please check ds.info.license and ds.info.homepage of each individual dataset.
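
For example (a minimal sketch, reusing the m4_daily config from above):

ds = datasets.load_dataset("autogluon/chronos_datasets", "m4_daily", split="train")
print(ds.info.license)   # license of this particular dataset
print(ds.info.homepage)  # homepage of the original data source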

Citation

If you find these datasets useful for your research, please consider citing the associated paper:

@article{ansari2024chronos,
  author  = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Wang, Hao and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
  title   = {Chronos: Learning the Language of Time Series},
  journal = {arXiv preprint arXiv:2403.07815},
  year    = {2024}
}