idevede committed
Commit db26a28 • 1 Parent(s): 69ee76b

Add the demos

Files changed (2)
  1. README.md +133 -20
  2. pics/TEMPO_logo.png +0 -0
README.md CHANGED
@@ -16,20 +16,75 @@ tags:
 ---
 # TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting

- The official code for ICLR 2024 paper: "TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting (ICLR 2024)".

 TEMPO is one of the first open-source **Time Series Foundation Models** for forecasting (v1.0).

- ![TEMPO-architecture](pics/TEMPO.png)

- Please try our foundation model demo [[here]](https://4171a8a7484b3e9148.gradio.live).

- ![TEMPO-demo](pics/TEMPO_demo.jpg)

- # Build the environment

 ```
 conda create -n tempo python=3.8
@@ -38,50 +93,93 @@ conda create -n tempo python=3.8
 conda activate tempo
 ```
 ```
 pip install -r requirements.txt
 ```

- # Get Data

 Download the data from [[Google Drive]](https://drive.google.com/drive/folders/13Cg1KYOlzM5C7K8gK8NfC-F3EYxkM3D2?usp=sharing) or [[Baidu Drive]](https://pan.baidu.com/s/1r3KhGd0Q9PJIUZdfEYoymg?pwd=i9iy), and place the downloaded data in the folder `./dataset`. You can also download the STL results from [[Google Drive]](https://drive.google.com/file/d/1gWliIGDDSi2itUAvYaRgACru18j753Kw/view?usp=sharing), and place the downloaded data in the folder `./stl`.

- # Run TEMPO

- ## Training Stage
 ```
 bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather].sh
 ```

- ## Test

 After training, we can test the TEMPO model under the zero-shot setting:

 ```
 bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather]_test.sh
 ```
- ![TEMPO-results](pics/results.jpg)

- # Pre-trained Models

 You can download the pre-trained model from [[Google Drive]](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link) and then run the test script for fun.

- # Multi-modality dataset: TETS dataset

 Here are the prompts used to generate the corresponding textual information of time series via the [[OpenAI ChatGPT-3.5 API]](https://platform.openai.com/docs/guides/text-generation):

- ![TEMPO-prompt](pics/TETS_prompt.png)
-

 The time series data come from [[S&P 500]](https://www.spglobal.com/spdji/en/indices/equity/sp-500/#overview). Here is the EBITDA case for one company from the dataset:

- ![Company1_ebitda_summary](pics/Company1_ebitda_summary.png)

 Example of generated contextual information for the company marked above:

- ![Company1_ebitda_summary_words.jpg](pics/Company1_ebitda_summary_words.jpg)
-
@@ -89,9 +187,10 @@ Example of generated contextual information for the Company marked above:
 You can download the processed data with text embeddings from GPT-2 from: [[TETS]](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link).

-
- ## Cite
 ```
 @inproceedings{
 cao2024tempo,
@@ -101,4 +200,18 @@ booktitle={The Twelfth International Conference on Learning Representations},
 year={2024},
 url={https://openreview.net/forum?id=YH5w12OUuU}
 }
- ```
 ---
 # TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting

+ [![preprint](https://img.shields.io/static/v1?label=arXiv&message=2310.04948&color=B31B1B&logo=arXiv)](https://arxiv.org/pdf/2310.04948)
+ [![huggingface](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-FFD21E)](https://huggingface.co/Melady/TEMPO)
+ [![License: Apache-2.0](https://img.shields.io/badge/License-Apache--2.0-green.svg)](https://opensource.org/licenses/Apache-2.0)
+
+ <div align="center"><img src=./pics/TEMPO_logo.png width=60% /></div>
+
+ The official model card for the ICLR 2024 paper "TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting".
+
+ The official code for the paper [["TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting" (ICLR 2024)]](https://arxiv.org/pdf/2310.04948).
+
 TEMPO is one of the first open-source **Time Series Foundation Models** for forecasting (v1.0).

+ <div align="center"><img src=./pics/TEMPO.png width=80% /></div>

+ ## 💡 Demos

+ ### 1. Reproducing zero-shot experiments on ETTh2

+ Please try to reproduce the zero-shot experiments on ETTh2 [[here on Colab]](https://colab.research.google.com/drive/11qGpT7H1JMaTlMlm9WtHFZ3_cJz7p-og?usp=sharing).
+
+ ### 2. Zero-shot experiments on a custom dataset
+
+ We use the following Colab page to show how to build a custom dataset and directly run inference with our pre-trained foundation model: [[Colab]](https://colab.research.google.com/drive/1ZpWbK0L6mq1pav2yDqOuORo4rHbv80-A?usp=sharing)
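+
+ For quick reference, here is a minimal offline sketch of the same workflow, assuming the package layout used in the Script Demo later in this README; `my_data.csv` and its `value` column are placeholder names, and the Colab notebook remains the authoritative walkthrough:
+
+ ```python
+ # Illustrative sketch only (not taken from the notebook): forecast your own series.
+ # "my_data.csv" and the "value" column are placeholders for your data.
+ import pandas as pd
+ import torch
+ from models.TEMPO import TEMPO
+
+ model = TEMPO.load_pretrained_model(
+     device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu'),
+     repo_id="Melady/TEMPO",
+     filename="TEMPO-80M_v1.pth",
+     cache_dir="./checkpoints/TEMPO_checkpoints"
+ )
+
+ series = pd.read_csv("my_data.csv")["value"].to_numpy()  # univariate history
+ context = series[-336:]  # the Script Demo uses a 336-step input window
+ with torch.no_grad():
+     forecast = model.predict(context, pred_length=96)  # 96-step forecast
+ ```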
+
+ ## ⏳ Upcoming Features
+
+ - [x] Parallel pre-training pipeline
+ - [ ] Probabilistic forecasting
+ - [ ] Multimodal dataset
+ - [ ] Multimodal pre-training script
+
+ ## 🚀 News
+
+ - **Oct 2024**: 🚀 We've streamlined our code structure, enabling users to download the pre-trained model and perform zero-shot inference with a single line of code! Check out our [demo](./run_TEMPO_demo.py) for more details. Our model's download count on Hugging Face is now trackable!
+ - **Jun 2024**: 🚀 We added demos for reproducing the zero-shot experiments in [Colab](https://colab.research.google.com/drive/11qGpT7H1JMaTlMlm9WtHFZ3_cJz7p-og?usp=sharing). We also added a demo of building a custom dataset and directly running inference with our pre-trained foundation model: [Colab](https://colab.research.google.com/drive/1ZpWbK0L6mq1pav2yDqOuORo4rHbv80-A?usp=sharing)
+ - **May 2024**: 🚀 TEMPO launched a GUI-based online [demo](https://4171a8a7484b3e9148.gradio.live/), allowing users to interact directly with our foundation model!
+ - **May 2024**: 🚀 TEMPO published the 80M pre-trained foundation model on [Hugging Face](https://huggingface.co/Melady/TEMPO)!
+ - **May 2024**: 🧪 We added the code for pre-training and inference with TEMPO models. You can find a pre-training script demo in [this folder](./scripts/etth2.sh). We also added [a script](./scripts/etth2_test.sh) for the inference demo.
+ - **Mar 2024**: 📈 Released the [TETS dataset](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link), based on [S&P 500](https://www.spglobal.com/spdji/en/indices/equity/sp-500/#overview) data, used in the multimodal experiments in TEMPO.
+ - **Mar 2024**: 🧪 TEMPO published the project [code](https://github.com/DC-research/TEMPO) and the pre-trained checkpoint [online](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link)!
+ - **Jan 2024**: 🚀 The TEMPO [paper](https://openreview.net/pdf?id=YH5w12OUuU) was accepted by ICLR!
+ - **Oct 2023**: 🚀 The TEMPO [paper](https://arxiv.org/pdf/2310.04948) was released on arXiv!

+ # Practice

+ ## Download the repo
+
+ ```
+ git clone git@github.com:DC-research/TEMPO.git
+ ```
+
+ ## [Optional] Download the model and config file via commands
+ ```
+ huggingface-cli download Melady/TEMPO config.json --local-dir ./TEMPO/TEMPO_checkpoints
+ ```
+ ```
+ huggingface-cli download Melady/TEMPO TEMPO-80M_v2.pth --local-dir ./TEMPO/TEMPO_checkpoints
+ ```
+ ```
+ huggingface-cli download Melady/TEMPO TEMPO-80M_v1.pth --local-dir ./TEMPO/TEMPO_checkpoints
+ ```
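+
+ If you prefer Python over the CLI, the same files can be fetched with the `huggingface_hub` library; a minimal sketch (assuming `huggingface_hub` is installed):
+
+ ```python
+ # Sketch: fetch the checkpoint and config with huggingface_hub instead of the CLI.
+ from huggingface_hub import hf_hub_download
+
+ for filename in ["config.json", "TEMPO-80M_v1.pth"]:
+     path = hf_hub_download(
+         repo_id="Melady/TEMPO",
+         filename=filename,
+         local_dir="./TEMPO/TEMPO_checkpoints",  # same target folder as the CLI commands
+     )
+     print(f"Downloaded {filename} to {path}")
+ ```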
+
+ ## Build the environment

 ```
 conda create -n tempo python=3.8
 conda activate tempo
 ```
 ```
+ cd TEMPO
+ ```
+ ```
 pip install -r requirements.txt
 ```

+ ## Script Demo
+
+ A streamlined example showing how to perform forecasting using TEMPO:
+
+ ```python
+ # Third-party library imports
+ import numpy as np
+ import torch
+ # Local imports
+ from models.TEMPO import TEMPO
+
+ # Download the pre-trained checkpoint from Hugging Face and load it
+ model = TEMPO.load_pretrained_model(
+     device=torch.device('cuda:0' if torch.cuda.is_available() else 'cpu'),
+     repo_id="Melady/TEMPO",
+     filename="TEMPO-80M_v1.pth",
+     cache_dir="./checkpoints/TEMPO_checkpoints"
+ )
+
+ input_data = np.random.rand(336)  # random univariate input window
+ with torch.no_grad():
+     predicted_values = model.predict(input_data, pred_length=96)
+ print("Predicted values:")
+ print(predicted_values)
+ ```
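+
+ To sanity-check a forecast against held-out ground truth, a minimal scoring sketch (illustrative only; both arrays below are random stand-ins for real values):
+
+ ```python
+ # Illustrative scoring of a 96-step forecast with MSE and MAE.
+ import numpy as np
+
+ ground_truth = np.random.rand(96)  # stand-in for real held-out observations
+ forecast = np.random.rand(96)      # stand-in for model.predict(...) output
+
+ mse = np.mean((forecast - ground_truth) ** 2)
+ mae = np.mean(np.abs(forecast - ground_truth))
+ print(f"MSE: {mse:.4f}, MAE: {mae:.4f}")
+ ```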
+
+ ## Online demo
+
+ Please try our foundation model demo [[here]](https://4171a8a7484b3e9148.gradio.live).
+
+ <div align="center"><img src=./pics/TEMPO_demo.jpg width=80% /></div>
+
+ ## Practice on your end
+
+ We also updated our models on Hugging Face: [[Melady/TEMPO]](https://huggingface.co/Melady/TEMPO).
+
+ ### Get Data

 Download the data from [[Google Drive]](https://drive.google.com/drive/folders/13Cg1KYOlzM5C7K8gK8NfC-F3EYxkM3D2?usp=sharing) or [[Baidu Drive]](https://pan.baidu.com/s/1r3KhGd0Q9PJIUZdfEYoymg?pwd=i9iy), and place the downloaded data in the folder `./dataset`. You can also download the STL results from [[Google Drive]](https://drive.google.com/file/d/1gWliIGDDSi2itUAvYaRgACru18j753Kw/view?usp=sharing), and place the downloaded data in the folder `./stl`.
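+
+ The STL results are the precomputed seasonal-trend decomposition (trend, seasonal, and residual components) that TEMPO's decomposition-based design relies on. To produce comparable components for your own series, a minimal sketch with `statsmodels` (an assumption on our part; the released `./stl` files may have been generated with different settings):
+
+ ```python
+ # Sketch: seasonal-trend decomposition via LOESS (STL) with statsmodels.
+ # The series and period here are illustrative placeholders.
+ import numpy as np
+ from statsmodels.tsa.seasonal import STL
+
+ series = np.sin(np.arange(336) * 2 * np.pi / 24) + 0.1 * np.random.randn(336)
+ result = STL(series, period=24).fit()  # e.g., hourly data with a daily cycle
+ trend, seasonal, resid = result.trend, result.seasonal, result.resid
+ ```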

+ ### Run TEMPO

+ #### Pre-Training Stage
 ```
 bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather].sh
 ```

+ #### Test / Inference Stage

 After training, we can test the TEMPO model under the zero-shot setting:

 ```
 bash [ecl, etth1, etth2, ettm1, ettm2, traffic, weather]_test.sh
 ```
+
+ <div align="center"><img src=./pics/results.jpg width=90% /></div>


+ ## Pre-trained Models

 You can download the pre-trained model from [[Google Drive]](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link) and then run the test script for fun.

+ ## TETS dataset

 Here are the prompts used to generate the corresponding textual information of time series via the [[OpenAI ChatGPT-3.5 API]](https://platform.openai.com/docs/guides/text-generation):

+ <div align="center"><img src=./pics/TETS_prompt.png width=80% /></div>

 The time series data come from [[S&P 500]](https://www.spglobal.com/spdji/en/indices/equity/sp-500/#overview). Here is the EBITDA case for one company from the dataset:

+ <div align="center"><img src=./pics/Company1_ebitda_summary.png width=80% /></div>

 Example of generated contextual information for the company marked above:

+ <div align="center"><img src=./pics/Company1_ebitda_summary_words.jpg width=80% /></div>

 You can download the processed data with text embeddings from GPT-2 from: [[TETS]](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link).
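+
+ To embed your own generated summaries in the same spirit, a minimal sketch with the `transformers` library (illustrative; the exact preprocessing and pooling used for TETS may differ):
+
+ ```python
+ # Sketch: turn a generated summary into a GPT-2 embedding vector.
+ # Mean-pooling over tokens is one common choice; TETS's exact recipe may differ.
+ import torch
+ from transformers import GPT2Model, GPT2Tokenizer
+
+ tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
+ model = GPT2Model.from_pretrained("gpt2")
+
+ text = "Quarterly EBITDA rose steadily before dipping sharply in early 2020."
+ inputs = tokenizer(text, return_tensors="pt")
+ with torch.no_grad():
+     hidden = model(**inputs).last_hidden_state  # (1, num_tokens, 768)
+ embedding = hidden.mean(dim=1)                  # (1, 768) summary vector
+ ```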

+ ## Contact
+ Feel free to contact DefuCao@USC.EDU / YanLiu.CS@USC.EDU if you're interested in applying TEMPO to your real-world applications.

+ ## Cite our work
 ```
 @inproceedings{
 cao2024tempo,
 year={2024},
 url={https://openreview.net/forum?id=YH5w12OUuU}
 }
+ ```
+
+ ```
+ @article{
+ Jia_Wang_Zheng_Cao_Liu_2024,
+ title={GPT4MTS: Prompt-based Large Language Model for Multimodal Time-series Forecasting},
+ volume={38},
+ url={https://ojs.aaai.org/index.php/AAAI/article/view/30383},
+ DOI={10.1609/aaai.v38i21.30383},
+ number={21},
+ journal={Proceedings of the AAAI Conference on Artificial Intelligence},
+ author={Jia, Furong and Wang, Kevin and Zheng, Yixiang and Cao, Defu and Liu, Yan},
+ year={2024}, month={Mar.}, pages={23343-23351}
+ }
+ ```
pics/TEMPO_logo.png ADDED