arjunashok committed · Commit f439182 · Parent(s): f936da5
Update README.md

README.md CHANGED
@@ -16,7 +16,13 @@ tags:
 
 Lag-Llama is the <b>first open-source foundation model for time series forecasting</b>!
 
-[[Tweet Thread](https://twitter.com/arjunashok37/status/1755261111233114165)]
+[[Tweet Thread](https://twitter.com/arjunashok37/status/1755261111233114165)]
+
+[[Code](https://github.com/time-series-foundation-models/lag-llama)] [[Model Weights](https://huggingface.co/time-series-foundation-models/Lag-Llama)] [[Colab Demo 1: Zero-Shot Forecasting](https://colab.research.google.com/drive/1DRAzLUPxsd-0r8b-o4nlyFXrjw_ZajJJ?usp=sharing)] [[Colab Demo 2: (Preliminary Finetuning)](https://colab.research.google.com/drive/1uvTmh-pe1zO5TeaaRVDdoEWJ5dFDI-pA?usp=sharing)]
+
+[[Paper](https://arxiv.org/abs/2310.08278)]
+
+[[Video](https://www.youtube.com/watch?v=Mf2FOzDPxck)]
 
 ____
 This HuggingFace model houses the <a href="https://huggingface.co/time-series-foundation-models/Lag-Llama/blob/main/lag-llama.ckpt" target="_blank">pretrained checkpoint</a> of Lag-Llama.
@@ -25,6 +31,7 @@ ____
 
 <b>Updates</b>:
 
+* **9-Apr-2024**: We have released a 15-minute video 🎥 on Lag-Llama on [YouTube](https://www.youtube.com/watch?v=Mf2FOzDPxck).
 * **5-Apr-2024**: Added a [section](https://colab.research.google.com/drive/1DRAzLUPxsd-0r8b-o4nlyFXrjw_ZajJJ?authuser=1#scrollTo=Mj9LXMpJ01d7&line=6&uniqifier=1) in Colab Demo 1 on the importance of tuning the context length for zero-shot forecasting. Added a [best practices section](https://github.com/time-series-foundation-models/lag-llama?tab=readme-ov-file#best-practices) in the README; added recommendations for finetuning. These recommendations will be demonstrated with an example in [Colab Demo 2](https://colab.research.google.com/drive/1uvTmh-pe1zO5TeaaRVDdoEWJ5dFDI-pA?usp=sharing) soon.
 * **4-Apr-2024**: We have updated our requirements file with new versions of certain packages. Please update/recreate your environments if you have previously used the code locally.
 * **7-Mar-2024**: We have released a preliminary [Colab Demo 2](https://colab.research.google.com/drive/1uvTmh-pe1zO5TeaaRVDdoEWJ5dFDI-pA?usp=sharing) for finetuning. Please note this is a preliminary tutorial. We recommend taking a look at the best practices if you are finetuning the model or using it for benchmarking.
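For reference, the pretrained checkpoint linked in the README above can be fetched programmatically. A minimal sketch, assuming the `huggingface_hub` package is installed; the repo id and filename are taken directly from the links in the diff:

```python
# Download the Lag-Llama pretrained checkpoint from the Hugging Face Hub.
# hf_hub_download caches the file locally and returns its path.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="time-series-foundation-models/Lag-Llama",
    filename="lag-llama.ckpt",
)
print(ckpt_path)
```

How the checkpoint is then loaded for zero-shot forecasting or finetuning is covered by the Colab demos linked above.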