Update README.md
README.md CHANGED
@@ -118,7 +118,7 @@ The models have been pre-trained using a blend of the following datasets.
 |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B|
 
 The pre-training was continuously conducted using a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens.
-We finalized the pre-training with additional (potentially) high-quality 27B tokens data obtained from the identical source
+We finalized the pre-training with additional (potentially) high-quality 27B tokens data obtained from the identical source datasets listed above used for the 10-fold data.
 
 ### Instruction tuning
 
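The changed lines describe the continual pre-training data schedule: ten non-overlapping folds of roughly 27-28B tokens each (about 275B tokens in total), finished with one extra ~27B-token pass of (potentially) higher-quality data drawn from the same source datasets. Below is a minimal sketch of how such a disjoint fold schedule could be laid out; the constants and the `fold_bounds` helper are illustrative assumptions, not the repository's actual pipeline.

```python
# Illustrative sketch only: partition a token budget into 10 non-overlapping
# folds for continual pre-training, as described in the README excerpt above.
# NUM_FOLDS and TOKENS_PER_FOLD are assumptions, not the authors' pipeline.

NUM_FOLDS = 10
TOKENS_PER_FOLD = 27_500_000_000  # ~27-28B tokens per fold (approximate)

def fold_bounds(fold_idx: int) -> tuple[int, int]:
    """Return the [start, end) token offsets of one non-overlapping fold."""
    start = fold_idx * TOKENS_PER_FOLD
    return start, start + TOKENS_PER_FOLD

# Ten disjoint folds consumed one after another ("continuously conducted"),
# then one extra ~27B-token stage of (potentially) higher-quality data from
# the same source datasets finalizes the pre-training.
schedule = [fold_bounds(i) for i in range(NUM_FOLDS)]
for start, end in schedule:
    print(f"pre-train on tokens [{start:,}, {end:,})")
print("final stage: ~27B additional high-quality tokens from the same sources")
```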