---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: "*/*.arrow"
- config_name: "UTSD-1G"
  data_files:
  - split: train
    path: "UTSD-1G/*.arrow"
- config_name: "UTSD-2G"
  data_files:
  - split: train
    path: "UTSD-2G/*.arrow"
- config_name: "UTSD-4G"
  data_files:
  - split: train
    path: "UTSD-4G/*.arrow"
- config_name: "UTSD-12G"
  data_files:
  - split: train
    path: "UTSD-12G/*.arrow"
---

# Unified Time Series Dataset (UTSD)

## Introduction

We curate the **Unified Time Series Dataset (UTSD)**, which covers **7 domains** with up to **1 billion time points** and is organized into **four hierarchical volumes**, to facilitate research on large models and pre-training in the field of time series.

<p align="center">
<img src="./figures/utsd.png" alt="" align=center />
</p>

**Unified Time Series Dataset (UTSD)** is meticulously assembled from a blend of publicly accessible online data repositories and empirical data derived from real-world machine operations.

All datasets are classified into seven distinct domains according to their source: **Energy, Environment, Health, Internet of Things (IoT), Nature, Transportation, and Web**, covering diverse sampling frequencies.

See the [paper](https://arxiv.org/abs/2402.02368) and [codebase](https://github.com/thuml/Large-Time-Series-Model) for more information.

## Detailed Dataset Descriptions

We analyze each dataset through the lenses of stationarity and forecastability, which together characterize its inherent level of complexity. In the table below, ADF. denotes the Augmented Dickey-Fuller test statistic (more negative values indicate stronger stationarity) and Forecast. denotes forecastability (higher values indicate series that are easier to forecast).

| Domain       | Dataset                         | Time Points | File Size | Freq. | ADF.    | Forecast. | Source                                    |
|--------------|----------------------------------|-------------|-----------|-------|---------|-----------|-------------------------------------------|
| Energy       | London Smart Meters             | 166.50M     | 4120M     | Hourly| -13.158 | 0.173    | [1]                                       |
| Energy | Wind Farms                      | 7.40M       | 179M      | 4 sec | -29.174 | 0.811    | [1]                                       |
| Energy | Aus. Electricity Demand         | 1.16M       | 35M       | 30 min| -27.554 | 0.730    | [1]                                       |
| Environment  | AustraliaRainfall               | 11.54M      | 54M       | Hourly| -150.10 | 0.458    | [2]                                       |
| Environment | BeijingPM25Quality              | 3.66M       | 26M       | Hourly| -31.415 | 0.404    | [2]                                       |
| Environment | BenzeneConcentration            | 16.34M      | 206M      | Hourly| -65.187 | 0.526    | [2]                                       |
| Health       | MotorImagery                   | 72.58M      | 514M      | 0.001 sec| -3.132 | 0.449    | [3]                                       |
| Health | SelfRegulationSCP1             | 3.02M       | 18M       | 0.004 sec| -3.191 | 0.504    | [3]                                       |
| Health | SelfRegulationSCP2             | 3.06M       | 18M       | 0.004 sec| -2.715 | 0.481    | [3]                                       |
| Health | AtrialFibrillation             | 0.04M       | 1M        | 0.008 sec| -7.061 | 0.167    | [3]                                       |
| Health | PigArtPressure                  | 0.62M       | 7M        | -      | -7.649  | 0.739    | [3]                                       |
| Health | PigCVP                          | 0.62M       | 7M        | -      | -4.855  | 0.577    | [3]                                       |
| Health | IEEEPPG                         | 15.48M      | 136M      | 0.008 sec| -7.725 | 0.380    | [2]                                       |
| Health | BIDMC32HR                       | 63.59M      | 651M      | -      | -14.135 | 0.523    | [2]                                       |
| Health | TDBrain                         | 72.30M      | 1333M     | 0.002 sec| -3.167 | 0.967    | [5]                                       |
| IoT          | SensorData                      | 165.4M      | 2067M     | 0.02 sec| -15.892 | 0.917    | Real-world machine logs                     |
| Nature       | Phoneme                         | 2.16M       | 25M       | -      | -8.506  | 0.243    | [3]                                       |
| Nature | EigenWorms                      | 27.95M      | 252M      | -      | -12.201 | 0.393    | [3]                                       |
| Nature | ERA5 Surface                    | 58.44M      | 574M      | 3 h    | -28.263 | 0.493    | [4]                                       |
| Nature | ERA5 Pressure                   | 116.88M     | 1083M     | 3 h    | -22.001 | 0.853    | [4]                                       |
| Nature | Temperature Rain                | 23.25M      | 109M      | Daily  | -10.952 | 0.133    | [1]                                       |
| Nature | StarLightCurves                 | 9.46M       | 109M      | -      | -1.891  | 0.555    | [3]                                       |
| Nature | Saugeen River Flow              | 0.02M       | 1M        | Daily  | -19.305 | 0.300    | [1]                                       |
| Nature | KDD Cup 2018                   | 2.94M       | 67M       | Hourly | -10.107 | 0.362    | [1]                                       |
| Nature | US Births                      | 0.00M       | 1M        | Daily  | -3.352  | 0.675    | [1]                                       |
| Nature | Sunspot                         | 0.07M       | 2M        | Daily  | -7.866  | 0.287    | [1]                                       |
| Nature | Worms                           | 0.23M       | 4M        | 0.033 sec| -3.851 | 0.395    | [3]                                       |
| Transportation | Pedestrian Counts             | 3.13M       | 72M       | Hourly | -23.462 | 0.297    | [1]                                       |
| Web          | Web Traffic                     | 116.49M     | 388M      | Daily  | -8.272  | 0.299    | [1]                                       |

You can find the specific source address of each dataset in `source.csv`.

[1]:  [Monash Time Series Forecasting Archive](https://arxiv.org/abs/2105.06643)
[2]:  [Time series extrinsic regression: Predicting numeric values from time series data](https://link.springer.com/article/10.1007/s10618-021-00745-9)
[3]:  [The UCR Time Series Archive](https://arxiv.org/abs/1810.07758) 
[4]:  [ERA5-Land: a state-of-the-art global reanalysis dataset for land applications](https://essd.copernicus.org/articles/13/4349/2021/)
[5]:  [Contrast Everything: A Hierarchical Contrastive Framework for Medical Time-Series](https://arxiv.org/abs/2310.14017)


## Hierarchy of Datasets

UTSD is constructed with hierarchical capacities, namely **UTSD-1G, UTSD-2G, UTSD-4G, and UTSD-12G**, where each smaller volume is a subset of the larger ones. Larger volumes offer greater data **difficulty** and **diversity**, allowing you to conduct detailed scaling experiments.

<p align="center">
<img src="./figures/utsd_complexity.png" alt="" align=center />
</p>

## Usage

You can load a UTSD volume with the following code:

```python
import datasets

# Load a UTSD volume from a locally downloaded directory
UTSD_12G = datasets.load_from_disk('UTSD-12G')
print(UTSD_12G)

# Each sample is a dict whose 'target' field holds one variable-length series
for item in UTSD_12G:
    print(item.keys(), 'len of target:', len(item['target']))
```
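
Each volume is also exposed as a config of this repository (see the YAML header above), so it can presumably be loaded directly from the Hugging Face Hub. The repository id in the sketch below is an assumption; substitute the id of this dataset repo:

```python
import datasets

# Load a specific volume from the Hub by config name.
# NOTE: the repository id "thuml/UTSD" is an assumption for illustration.
utsd_1g = datasets.load_dataset("thuml/UTSD", "UTSD-1G", split="train")
print(utsd_1g)
print(utsd_1g[0].keys())
```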

Note that the samples in UTSD have diverse lengths, so the sequence length differs from sample to sample. You can build your own data organization logic according to your needs, as in the sketch below.
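
For example, a minimal sketch that slices each variable-length `target` series into non-overlapping fixed-length windows (the window length and the `UTSD_12G` object from the snippet above are illustrative assumptions):

```python
import numpy as np

WINDOW = 512  # illustrative context length, not prescribed by UTSD

def to_windows(example):
    """Slice one variable-length series into non-overlapping fixed-length windows."""
    series = np.asarray(example["target"], dtype=np.float32)
    n = len(series) // WINDOW
    return series[: n * WINDOW].reshape(n, WINDOW)

# Collect windows from the first few samples of the volume loaded above
windows = [w for item in UTSD_12G.select(range(8)) for w in to_windows(item)]
print(len(windows), "windows of length", WINDOW)
```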

In addition, we provide the script `dataset_evaluation.py` for evaluating time series datasets, which you can use to evaluate your own Hugging Face-formatted dataset. The script is used as follows:

```bash
python dataset_evaluation.py --root_path <dataset root path> --log_path <output log path>
```
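
For reference, the two complexity metrics reported in the table above can be approximated as follows. This is an illustrative sketch using `statsmodels` and a spectral-entropy-based forecastability measure, not the actual implementation of `dataset_evaluation.py`:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_statistic(series: np.ndarray) -> float:
    """Augmented Dickey-Fuller test statistic; more negative means stronger stationarity."""
    return adfuller(series)[0]

def forecastability(series: np.ndarray) -> float:
    """One minus normalized spectral entropy; higher means easier to forecast."""
    spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
    p = spectrum / spectrum.sum()
    entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))
    return 1.0 - entropy

# Quick check on a noisy sine wave
x = np.sin(0.1 * np.arange(2000)) + 0.1 * np.random.randn(2000)
print(adf_statistic(x), forecastability(x))
```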

## Acknowledgments

UTSD is mostly built from publicly available time series datasets on the Internet, which come from different research teams and providers. We sincerely thank all individuals and organizations who have contributed the data; without their generous sharing, this dataset would not exist.

## Citation


If you use UTSD in your research or applications, please cite it with the following BibTeX:

```bibtex
@article{liu2024timer,
  title={Timer: Transformers for Time Series Analysis at Scale},
  author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  journal={arXiv preprint arXiv:2402.02368},
  year={2024}
}
```