The pretraining data consists of Falcon, Starcoder, and the wikipedia, arxiv, books, and stackexchange subsets of RedPajama, totaling nearly 1 trillion tokens.
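
For concreteness, here is a minimal sketch of how such a mixture could be streamed from the Hugging Face Hub. The repository IDs, config names, and text-column names below are assumptions (the README names the sources but not the exact repositories), and no sampling weights are stated, so the sketch simply alternates across sources.

```python
from datasets import load_dataset


def stream_text(repo, config=None, column="text"):
    """Yield raw documents from a streaming Hub dataset.

    Repo IDs, config names, and column names here are assumptions
    made for illustration; they are not stated in this README.
    """
    for example in load_dataset(repo, config, split="train", streaming=True):
        yield example[column]


sources = [
    stream_text("tiiuae/falcon-refinedweb", column="content"),   # Falcon
    stream_text("bigcode/starcoderdata", column="content"),      # Starcoder
    *(stream_text("togethercomputer/RedPajama-Data-1T", cfg)     # RedPajama subsets
      for cfg in ("wikipedia", "arxiv", "book", "stackexchange")),
]


def round_robin(streams):
    # Alternate one document at a time across sources. The README gives
    # no per-source sampling weights, so this mixes uniformly.
    iterators = [iter(s) for s in streams]
    while iterators:
        for it in list(iterators):
            try:
                yield next(it)
            except StopIteration:
                iterators.remove(it)


mixture = round_robin(sources)
```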
The model was trained for a single epoch with 2000 warm-up steps and a cosine learning rate schedule, peaking at 3e-5 after warm-up, with a 4M-token batch size.
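
As a rough illustration, that schedule is linear warm-up into a cosine decay. The warm-up length and peak rate come from the description above; the total step count is an estimate (nearly 1T tokens at a 4M-token batch is roughly 250k optimizer steps), and the final learning rate is a placeholder, as neither is stated here.

```python
import math

WARMUP_STEPS = 2000      # from the README
PEAK_LR = 3e-5           # from the README
TOTAL_STEPS = 250_000    # estimate: ~1T tokens / 4M-token batches; not stated


def learning_rate(step: int, min_lr: float = 0.0) -> float:
    """Linear warm-up to PEAK_LR, then cosine decay to min_lr."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = min((step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS), 1.0)
    return min_lr + 0.5 * (PEAK_LR - min_lr) * (1.0 + math.cos(math.pi * progress))


# e.g. mid-warm-up, peak, and end of training:
print(learning_rate(1000), learning_rate(2000), learning_rate(250_000))
```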
![image/png](https://cdn-uploads.huggingface.co/production/uploads/643fb889b9ba82afb66d6b36/7nSjTJNB7qjwIa74Rr6kd.png)