---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: '*/*.arrow'
- config_name: UTSD-1G
data_files:
- split: train
path: UTSD-1G/*.arrow
- config_name: UTSD-2G
data_files:
- split: train
path: UTSD-2G/*.arrow
- config_name: UTSD-4G
data_files:
- split: train
path: UTSD-4G/*.arrow
- config_name: UTSD-12G
data_files:
- split: train
path: UTSD-12G/*.arrow
task_categories:
- time-series-forecasting
tags:
- time series forecasting
- time series analysis
- time series
pretty_name: UTSD
size_categories:
- 100M<n<1B
---
# Unified Time Series Dataset (UTSD)
## Updates
:triangular_flag_on_post: **News** (2024.10) We release the numpy format of [UTSD](https://cloud.tsinghua.edu.cn/f/93868e3a9fb144fe9719/). An easier and more efficient dataloader can be found [here](https://github.com/thuml/OpenLTM/blob/main/data_provider/data_loader.py).
## Introduction
We curate the **Unified Time Series Dataset (UTSD)**, which spans **7 domains** with up to **1 billion time points**, organized into **four** hierarchical volumes, to facilitate research on large models and pre-training in the field of time series.
<p align="center">
<img src="./figures/utsd.png" alt="" align=center />
</p>
**Unified Time Series Dataset (UTSD)** is meticulously assembled from a blend of publicly accessible online data repositories and empirical data derived from real-world machine operations.
All datasets are classified into seven distinct domains by their source: **Energy, Environment, Health, Internet of Things (IoT), Nature, Transportation, and Web**, with diverse sampling frequencies.
See the [paper](https://arxiv.org/abs/2402.02368) and [codebase](https://github.com/thuml/Large-Time-Series-Model) for more information.
## Detailed Dataset Descriptions
We analyze each dataset, examining its time series through the lenses of stationarity (ADF test statistic) and forecastability, to characterize the level of complexity inherent to each dataset.
| Domain | Dataset | Time Points | File Size | Freq. | ADF Stat. | Forecastability | Source |
|--------------|----------------------------------|-------------|-----------|-------|---------|-----------|-------------------------------------------|
| Energy | London Smart Meters | 166.50M | 4120M | Hourly| -13.158 | 0.173 | [1] |
| Energy | Wind Farms | 7.40M | 179M | 4 sec | -29.174 | 0.811 | [1] |
| Energy | Aus. Electricity Demand | 1.16M | 35M | 30 min| -27.554 | 0.730 | [1] |
| Environment | AustraliaRainfall | 11.54M | 54M | Hourly| -150.10 | 0.458 | [2] |
| Environment | BeijingPM25Quality | 3.66M | 26M | Hourly| -31.415 | 0.404 | [2] |
| Environment | BenzeneConcentration | 16.34M | 206M | Hourly| -65.187 | 0.526 | [2] |
| Health | MotorImagery | 72.58M | 514M | 0.001 sec| -3.132 | 0.449 | [3] |
| Health | SelfRegulationSCP1 | 3.02M | 18M | 0.004 sec| -3.191 | 0.504 | [3] |
| Health | SelfRegulationSCP2 | 3.06M | 18M | 0.004 sec| -2.715 | 0.481 | [3] |
| Health | AtrialFibrillation | 0.04M | 1M | 0.008 sec| -7.061 | 0.167 | [3] |
| Health | PigArtPressure | 0.62M | 7M | - | -7.649 | 0.739 | [3] |
| Health | PigCVP | 0.62M | 7M | - | -4.855 | 0.577 | [3] |
| Health | IEEEPPG | 15.48M | 136M | 0.008 sec| -7.725 | 0.380 | [2] |
| Health | BIDMC32HR | 63.59M | 651M | - | -14.135 | 0.523 | [2] |
| Health | TDBrain | 72.30M | 1333M | 0.002 sec| -3.167 | 0.967 | [5] |
| IoT | SensorData | 165.4M | 2067M | 0.02 sec| -15.892 | 0.917 | Real-world machine logs |
| Nature | Phoneme | 2.16M | 25M | - | -8.506 | 0.243 | [3] |
| Nature | EigenWorms | 27.95M | 252M | - | -12.201 | 0.393 | [3] |
| Nature | ERA5 Surface | 58.44M | 574M | 3 h | -28.263 | 0.493 | [4] |
| Nature | ERA5 Pressure | 116.88M | 1083M | 3 h | -22.001 | 0.853 | [4] |
| Nature | Temperature Rain | 23.25M | 109M | Daily | -10.952 | 0.133 | [1] |
| Nature | StarLightCurves | 9.46M | 109M | - | -1.891 | 0.555 | [3] |
| Nature | Saugen River Flow | 0.02M | 1M | Daily | -19.305 | 0.300 | [1] |
| Nature | KDD Cup 2018 | 2.94M | 67M | Hourly | -10.107 | 0.362 | [1] |
| Nature | US Births | 0.00M | 1M | Daily | -3.352 | 0.675 | [1] |
| Nature | Sunspot | 0.07M | 2M | Daily | -7.866 | 0.287 | [1] |
| Nature | Worms | 0.23M | 4M | 0.033 sec| -3.851 | 0.395 | [3] |
| Transport | Pedestrian Counts | 3.13M | 72M | Hourly | -23.462 | 0.297 | [1] |
| Web | Web Traffic | 116.49M | 388M | Daily | -8.272 | 0.299 | [1] |
You can find the specific source addresses in `source.csv`.
[1]: [Monash Time Series Forecasting Archive](https://arxiv.org/abs/2105.06643)
[2]: [Time series extrinsic regression: Predicting numeric values from time series data](https://link.springer.com/article/10.1007/s10618-021-00745-9)
[3]: [The UCR Time Series Archive](https://arxiv.org/abs/1810.07758)
[4]: [ERA5-Land: a state-of-the-art global reanalysis dataset for land applications](https://essd.copernicus.org/articles/13/4349/2021/)
[5]: [Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series](https://arxiv.org/abs/2310.14017)
## Hierarchy of Datasets
UTSD is constructed with hierarchical capacities, namely **UTSD-1G, UTSD-2G, UTSD-4G, and UTSD-12G**, where each smaller dataset is a subset of the larger ones. A larger subset means greater data **difficulty** and **diversity**, allowing you to conduct detailed scaling experiments.
<p align="center">
<img src="./figures/utsd_complexity.png" alt="" align=center />
</p>
## Usage
You can access and load UTSD using [the code in our repo](https://github.com/thuml/Large-Time-Series-Model/tree/main/scripts/UTSD).
```bash
# huggingface-cli login
# export HF_ENDPOINT=https://hf-mirror.com
python ./scripts/UTSD/download_dataset.py
# dataloader
python ./scripts/UTSD/utsdataset.py
```
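Alternatively, here is a minimal sketch of loading a single volume directly with the `datasets` library. The repo id below is a placeholder; the config names come from the YAML header of this card:

```python
from datasets import load_dataset

# Placeholder repo id; replace it with the actual Hugging Face path of this dataset card.
REPO_ID = "thuml/UTSD"

# Config names follow the YAML header: "UTSD-1G", "UTSD-2G", "UTSD-4G", "UTSD-12G",
# or "default" for all volumes. Streaming avoids materializing every .arrow shard at once.
utsd_1g = load_dataset(REPO_ID, name="UTSD-1G", split="train", streaming=True)

first = next(iter(utsd_1g))
print(first.keys())  # inspect the fields of a single time series sample
```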
Note that, because the dataset is constructed from series of diverse lengths, the sequence lengths of different samples vary. You can implement the data organization logic according to your own needs, for example by slicing each series into fixed-length windows as sketched below.
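The following sketch shows one possible organization; the field name `target` is hypothetical, so check the actual feature names of the loaded samples:

```python
import numpy as np

def to_fixed_windows(series: np.ndarray, context_len: int = 512, stride: int = 512) -> np.ndarray:
    """Slice a 1-D series into fixed-length windows; the trailing remainder is dropped."""
    n_windows = (len(series) - context_len) // stride + 1
    if n_windows <= 0:
        return np.empty((0, context_len), dtype=series.dtype)
    return np.stack([series[i * stride : i * stride + context_len] for i in range(n_windows)])

# Hypothetical usage: assumes `sample["target"]` holds the raw values of one series.
# windows = to_fixed_windows(np.asarray(sample["target"], dtype=np.float32), context_len=512)
```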
In addition, we provide the script `dataset_evaluation.py` for evaluating time series datasets, which you can use to evaluate your own Hugging Face-formatted dataset. The usage of this script is as follows:
```bash
python dataset_evaluation.py --root_path <dataset root path> --log_path <output log path>
```
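The ADF and forecastability columns in the table above can be reproduced in spirit with standard tools. The sketch below is an illustration rather than the script itself, assuming the ADF test statistic from `statsmodels` and forecastability measured as one minus the normalized spectral entropy of the series:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_stat(series: np.ndarray) -> float:
    """Augmented Dickey-Fuller test statistic; more negative suggests stronger stationarity."""
    return adfuller(series, autolag="AIC")[0]

def forecastability(series: np.ndarray) -> float:
    """One minus the normalized entropy of the power spectrum; higher means easier to forecast."""
    spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
    p = spectrum / spectrum.sum()
    entropy = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))
    return 1.0 - entropy

# Example on a noisy sine wave: strongly periodic, so forecastability should be high.
t = np.arange(2048)
x = np.sin(2 * np.pi * t / 64) + 0.1 * np.random.randn(len(t))
print(f"ADF: {adf_stat(x):.3f}, forecastability: {forecastability(x):.3f}")
```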
## Acknowledgments
UTSD is mostly built from publicly available time series datasets on the Internet, which come from different research teams and providers. We sincerely thank all individuals and organizations who have contributed the data. Without their generous sharing, this dataset would not exist.
## Citation
If you use UTSD in your research or applications, please cite it using the following BibTeX:
```bibtex
@inproceedings{liutimer,
title={Timer: Generative Pre-trained Transformers Are Large Time Series Models},
author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
booktitle={Forty-first International Conference on Machine Learning}
}
```