samsja committed
Commit 9c49fb6 · verified · 1 Parent(s): 96b5eaf

Update README.md

Files changed (1): README.md (+61 -3)
README.md CHANGED
@@ -6,6 +6,17 @@ datasets:
6
  - PrimeIntellect/StackV1-popular
7
  - mlfoundations/dclm-baseline-1.0-parquet
8
  - open-web-math/open-web-math
9
  language:
10
  - en
11
  pipeline_tag: text-generation
@@ -17,6 +28,9 @@ pipeline_tag: text-generation
17
 
18
  ![Intellect 1 training visual](intellect-1-map.png)
19
20
  **INTELLECT-1** was trained on up to 14 concurrent nodes distributed across 3 continents, with contributions from 30 independent community contributors providing compute.
21
The training code utilizes the [prime framework](https://github.com/PrimeIntellect-ai/prime), a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
22
  The key abstraction that allows dynamic scaling is the `ElasticDeviceMesh` which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node.
@@ -24,7 +38,7 @@ The model was trained using the [DiLoCo](https://arxiv.org/abs/2311.08105) algor
24
 
25
  For more detailed technical insights, please refer to our [technical paper](https://github.com/PrimeIntellect-ai/prime).
26
 
27
- **Note: The model will immediately output EOS token if the BOS token is not set. This is a result of the tensor packing used during training. This can result in terrible eval scores.**
28
 
29
  ## Usage
30
  ```python
@@ -54,7 +68,7 @@ print(pipe("What is prime intellect ?"))
54
  ```
55
 
56
  ## **Model Details**
57
- - **Model Contributors**: samsja, Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
58
  - **Release Date**: 29 Nov 2024
59
  - **Model License**: Apache 2.0
60
 
@@ -73,6 +87,45 @@ print(pipe("What is prime intellect ?"))
73
  - **Tokens**: 1 Trillion
74
 - **Optimizer**: DiLoCo/LocalSGD - Inner Optimizer: AdamW, Outer Optimizer: Nesterov SGD
75
 
76
  **Performance on benchmarks**
77
 
78
  | Model | Size | Tokens | MMLU | GPQA | GSM8K | ARC-C | Hellaswag |
@@ -88,5 +141,10 @@ print(pipe("What is prime intellect ?"))
88
  ## **Citations**
89
  If you use this model in your research, please cite it as follows:
90
  ```
91
- @article{}
92
  ```
 
6
  - PrimeIntellect/StackV1-popular
7
  - mlfoundations/dclm-baseline-1.0-parquet
8
  - open-web-math/open-web-math
9
+ - MaziyarPanahi/open-perfectblend-fixed
10
+ - mlabonne/orca-agentinstruct-1M-v1-cleaned
11
+ - Post-training-Data-Flywheel/AutoIF-instruct-61k
12
+ - Team-ACE/ToolACE
13
+ - MaziyarPanahi/Synthia-Coder-v1.5-I-sharegpt
14
+ - ServiceNow-AI/M2Lingual
15
+ - AI-MO/NuminaMath-TIR
16
+ - allenai/tulu-3-sft-personas-code
17
+ - tulu-3-sft-personas-math
18
+ - tulu-3-sft-personas-math-grade
19
+ - tulu-3-sft-personas-algebra
20
  language:
21
  - en
22
  pipeline_tag: text-generation
 
28
 
29
  ![Intellect 1 training visual](intellect-1-map.png)
30
 
31
+ This is an instruct model. The base model associated with it is [INTELLECT-1](https://huggingface.co/PrimeIntellect/INTELLECT-1).
32
+
33
+
34
  **INTELLECT-1** was trained on up to 14 concurrent nodes distributed across 3 continents, with contributions from 30 independent community contributors providing compute.
35
The training code utilizes the [prime framework](https://github.com/PrimeIntellect-ai/prime), a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
36
  The key abstraction that allows dynamic scaling is the `ElasticDeviceMesh` which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node.
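To make the two-level structure concrete, here is a rough conceptual sketch in plain `torch.distributed` terms. It is not the `prime` framework's actual implementation; only the name `ElasticDeviceMesh` is taken from the description above, and all method names are illustrative.

```python
# Conceptual sketch of an elastic device mesh (NOT the prime framework's real API).
import torch
import torch.distributed as dist

class ElasticDeviceMeshSketch:
    """Keeps a fast intra-node group plus a global group that can be rebuilt
    whenever remote workers join or drop, as described above."""

    def __init__(self, local_ranks: list[int]):
        self.local_group = dist.new_group(ranks=local_ranks)  # ranks on the same node
        self.global_group = dist.group.WORLD                  # all nodes, over the internet

    def rebuild_global_group(self, alive_ranks: list[int]):
        # On worker failure or arrival, form a new global group from reachable ranks;
        # node-local groups are unaffected.
        self.global_group = dist.new_group(ranks=alive_ranks)

    def all_reduce_global(self, tensor: torch.Tensor):
        # Infrequent, fault-tolerant communication across nodes (e.g. outer-loop syncs).
        dist.all_reduce(tensor, op=dist.ReduceOp.SUM, group=self.global_group)
        tensor /= dist.get_world_size(group=self.global_group)
```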
 
38
 
39
  For more detailed technical insights, please refer to our [technical paper](https://github.com/PrimeIntellect-ai/prime).
40
 
41
+ **Note: You must add a BOS token at the beginning of each sample. Performance may be impacted otherwise.**
42
 
43
  ## Usage
44
  ```python
 
68
  ```
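Given the BOS note above, it can be worth verifying that the tokenizer actually prepends the BOS token. A minimal sanity-check sketch follows; the `PrimeIntellect/INTELLECT-1-Instruct` repo id is assumed for illustration rather than taken from this card.

```python
# Sanity check that a BOS token is prepended (see the note above).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")  # assumed repo id

ids = tokenizer("What is prime intellect ?").input_ids
assert ids[0] == tokenizer.bos_token_id, "BOS missing: prepend tokenizer.bos_token to each sample"
print(tokenizer.convert_ids_to_tokens(ids[:3]))
```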
69
 
70
  ## **Model Details**
71
+ - **Compute Contributors**: Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
72
  - **Release Date**: 29 Nov 2024
73
  - **Model License**: Apache 2.0
74
 
 
87
  - **Tokens**: 1 Trillion
88
 - **Optimizer**: DiLoCo/LocalSGD - Inner Optimizer: AdamW, Outer Optimizer: Nesterov SGD
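For intuition, the inner/outer split works roughly as follows: each worker takes many local AdamW steps, and the resulting parameter delta is treated as an outer pseudo-gradient for Nesterov SGD. The sketch below is a simplified single-worker illustration, not the INTELLECT-1 training code, and its hyperparameters are placeholders.

```python
# Simplified sketch of the DiLoCo / LocalSGD inner-outer optimizer split (illustrative only).
import copy
import torch

def diloco_round(model, outer_opt, batches, loss_fn, inner_steps=500, inner_lr=4e-4):
    anchor = copy.deepcopy(model.state_dict())       # parameters at the start of the round
    inner_opt = torch.optim.AdamW(model.parameters(), lr=inner_lr)

    for _ in range(inner_steps):                     # many cheap local AdamW steps
        inputs, targets = next(batches)
        inner_opt.zero_grad()
        loss_fn(model(inputs), targets).backward()
        inner_opt.step()

    # The parameter delta acts as an "outer pseudo-gradient"; in the real setup it is
    # all-reduced across nodes over the internet before the outer step.
    for name, param in model.named_parameters():
        param.grad = anchor[name] - param.data
    model.load_state_dict(anchor)                    # rewind, then apply one outer step
    outer_opt.step()                                 # Nesterov SGD on the pseudo-gradient

# Usage sketch (placeholder values):
# outer_opt = torch.optim.SGD(model.parameters(), lr=0.7, momentum=0.9, nesterov=True)
# for _ in range(num_rounds):
#     diloco_round(model, outer_opt, batches, loss_fn)
```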
89
 
90
+
91
+
92
+ ## Post-training
93
+
94
+ Post-training was handled by [Arcee AI](https://huggingface.co/arcee-ai).
95
+
96
+ After completing the globally distributed pretraining phase, we applied several post-training techniques to enhance INTELLECT-1's capabilities and task-specific performance. Our post-training methodology consisted of three main phases.
97
+
98
+ First, we conducted an extensive series of 16 Supervised Fine-Tuning (SFT) runs, with individual runs ranging from 1 to 3.3 billion tokens each. The most successful configuration used 2.4 billion training tokens over 3 epochs. We used MergeKit, EvolKit, and DistillKit from Arcee AI to combine the models, generate the datasets, and distill the logits, respectively. For training data, we used a diverse set of high-quality datasets (a data-loading sketch follows the lists below):
99
+
100
+ 1. **New Datasets** (released with INTELLECT-1):
101
+ - arcee-ai/EvolKit-75k (generated via EvolKit)
102
+ - arcee-ai/Llama-405B-Logits
103
+ - arcee-ai/The-Tomb
104
+
105
+ 2. **Instruction Following**:
106
+ - [MaziyarPanahi/open-perfectblend-fixed](https://huggingface.co/datasets/MaziyarPanahi/open-perfectblend-fixed) (generalist capabilities)
107
+ - [mlabonne/orca-agentinstruct-1M-v1-cleaned](https://huggingface.co/datasets/mlabonne/orca-agentinstruct-1M-v1-cleaned) (Chain-of-Thought)
108
+ - [Post-training-Data-Flywheel/AutoIF-instruct-61k](https://huggingface.co/datasets/Post-training-Data-Flywheel/AutoIF-instruct-61k)
109
+
110
+ 3. **Domain-Specific**:
111
+ - [Team-ACE/ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE) (function calling)
112
+ - [MaziyarPanahi/Synthia-Coder-v1.5-I-sharegpt](https://huggingface.co/datasets/MaziyarPanahi/Synthia-Coder-v1.5-I-sharegpt) (programming)
113
+ - [ServiceNow-AI/M2Lingual](https://huggingface.co/datasets/ServiceNow-AI/M2Lingual) (multilingual)
114
+ - [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) (mathematics)
115
+
116
+ 4. **Tulu-3 Persona Datasets**:
117
+ - [allenai/tulu-3-sft-personas-code](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-code)
118
+ - [allenai/tulu-3-sft-personas-math](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math)
119
+ - [allenai/tulu-3-sft-personas-math-grade](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-math-grade)
120
+ - [allenai/tulu-3-sft-personas-algebra](https://huggingface.co/datasets/allenai/tulu-3-sft-personas-algebra)
121
+
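As referenced above, here is a rough sketch of pulling a few of the listed SFT sources with the Hugging Face `datasets` library. Split names are assumed to be `train`, and the actual mixture ratios and preprocessing used by Arcee AI are not documented in this card.

```python
# Pull a few of the SFT sources listed above (illustrative only; splits assumed).
from datasets import load_dataset

sft_sources = [
    "MaziyarPanahi/open-perfectblend-fixed",
    "Team-ACE/ToolACE",
    "AI-MO/NuminaMath-TIR",
    "allenai/tulu-3-sft-personas-code",
]

for name in sft_sources:
    ds = load_dataset(name, split="train")
    print(f"{name}: {len(ds):,} examples, columns: {ds.column_names}")
# Each source would then be converted to a shared chat format and mixed before SFT.
```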
122
+ Second, we executed 8 distinct Direct Preference Optimization (DPO) runs with various combinations of datasets to enhance specific performance metrics and align the model with human preferences. A key advantage of our post-training process was INTELLECT-1's use of the Llama-3 tokenizer, which allowed us to use logits from Llama-3.1-405B via DistillKit to heal and maintain precision during post-training.
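For reference, each of these runs optimizes the standard DPO objective; the generic sketch below (not Arcee AI's training code) shows the loss computed from summed response log-probabilities under the policy and a frozen reference model.

```python
# Generic DPO loss (Rafailov et al., 2023); illustrative, not the actual training code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Inputs are summed log-probabilities of chosen/rejected responses."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    logits = beta * (policy_margin - ref_margin)
    return -F.logsigmoid(logits).mean()
```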
123
+
124
+ Finally, we performed 16 strategic merges between candidate models using MergeKit to create superior combined models that leverage the strengths of different training runs. During the post-training phase, we observed that when using a ChatML template without an explicit BOS (begin-of-sequence) token, the initial loss was approximately 15. However, when switching to the Llama 3.1 chat template, the loss for these trainings started much lower at approximately 1.1, indicating better alignment with the underlying Llama 3 tokenizer.
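To illustrate what a merge does at the weight level, here is a deliberately naive uniform-averaging sketch in plain PyTorch. It is not MergeKit's interface and not the merge method used for these 16 merges; MergeKit supports far more sophisticated techniques.

```python
# Naive uniform ("linear") merge of candidate checkpoints (conceptual only).
import torch

def linear_merge(state_dicts: list[dict]) -> dict:
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged

# Usage sketch:
# candidates = [torch.load(path, map_location="cpu") for path in checkpoint_paths]
# merged_weights = linear_merge(candidates)
```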
125
+
126
+ The combination of these post-training techniques resulted in significant improvements across various benchmarks, particularly in knowledge retrieval, grade-school math, instruction following, and reasoning.
127
+
128
+
129
  **Performance on benchmarks**
130
 
131
  | Model | Size | Tokens | MMLU | GPQA | GSM8K | ARC-C | Hellaswag |
 
141
  ## **Citations**
142
  If you use this model in your research, please cite it as follows:
143
  ```
144
+ @article{jaghouar2024intellect,
145
+ title={INTELLECT-1 Technical Report.},
146
+ author={Jaghouar, Sami and Ong, Jack Min and Basra, Manveer and Obeid, Fares and Straube, Jannik and Keiblinger, Michael and Bakouch, Elie and Atkins, Lucas and Panahi, Maziyar and Goddard, Charles and Ryabinin, Max and Hagemann, Johannes},
147
+ journal={arXiv preprint},
148
+ year={2024}
149
+ }
150
  ```