---
license: apache-2.0
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
language:
- en
pipeline_tag: text-generation
---
# INTELLECT-1

## **Model Overview**
**INTELLECT-1** is the first collaboratively trained 10-billion-parameter language model, trained from scratch on 1 trillion tokens of English text and code.

![Intellect 1 training visual](intellect-1-map.png)

**INTELLECT-1** was trained on up to 14 concurrent nodes distributed across 3 continents, with 30 independent community contributors providing compute.
The training code uses the [prime framework](https://github.com/PrimeIntellect-ai/prime), a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
The key abstraction that enables dynamic scaling is the `ElasticDeviceMesh`, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node.
The global all-reduce is performed with custom int8 kernels, shrinking the communication payload to a quarter of its fp32 size and greatly reducing communication overhead.
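
As a rough illustration of the int8 all-reduce idea (a minimal sketch, not the project's actual kernels; it assumes an initialized `torch.distributed` process group, and it accumulates in int32 for simplicity where the real kernels keep the wire payload in int8):

```python
import torch
import torch.distributed as dist

def int8_all_reduce(grad: torch.Tensor) -> torch.Tensor:
    """Average a gradient tensor across workers using an int8 payload."""
    # Agree on one shared symmetric quantization scale across all workers.
    amax = grad.abs().amax()
    dist.all_reduce(amax, op=dist.ReduceOp.MAX)
    scale = amax.clamp(min=1e-8) / 127.0

    # Quantize fp32 -> int8: the payload is 4x smaller than fp32.
    q = torch.round(grad / scale).clamp(-127, 127).to(torch.int8)

    # Accumulate in int32 so the sum over workers cannot overflow int8.
    acc = q.to(torch.int32)
    dist.all_reduce(acc, op=dist.ReduceOp.SUM)

    # Dequantize and average.
    return acc.to(torch.float32) * scale / dist.get_world_size()
```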

For more detailed technical insights, please refer to our [technical paper](https://github.com/PrimeIntellect-ai/prime).

**Note: The model will immediately output an EOS token if the BOS token is not set. This is a side effect of the tensor packing used during training, and it can result in terrible eval scores. Make sure your tokenizer actually prepends BOS before generating.**
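
One way to guard against this, using the standard `transformers` tokenizer API (a minimal sanity check, not an official recipe):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")

# Encoding with special tokens enabled should place BOS first.
ids = tokenizer("Hello", add_special_tokens=True)["input_ids"]
assert ids[0] == tokenizer.bos_token_id, "BOS missing; prepend it manually"

# If your pipeline strips special tokens, prepend BOS yourself:
text = tokenizer.bos_token + "Hello"
```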

## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run on GPU by default; drop this line for CPU inference.
torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")
tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1-Instruct")

input_text = "What is the Metamorphosis of Prime Intellect about?"
# encode() adds special tokens (typically including BOS) by default; see the note above.
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(output_text)
```

### Example text generation pipeline
```python
import torch
from transformers import pipeline

torch.set_default_device("cuda")

# The pipeline wraps tokenization, generation, and decoding in one call.
pipe = pipeline("text-generation", model="PrimeIntellect/INTELLECT-1")
print(pipe("What is Prime Intellect?"))
```

## **Model Details**
- **Model Contributors**: samsja, Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
- **Release Date**: 29 Nov 2024
- **Model License**: Apache 2.0

## **Technical Specifications**
| **Parameter** | **Value** |
|---------------------------|--------|
| Parameter Size            | 10B    |
| Number of Layers          | 42     |
| Number of Attention Heads | 32     |
| Hidden Size               | 4096   |
| Context Length            | 8192   |
| Vocabulary Size           | 128256 |

**Training Details**:
- **Dataset**: 55% fineweb-edu, 10% fineweb, 20% Stack V1, 10% dclm-baseline, 5% open-web-math
- **Tokens**: 1 Trillion
- **Optimizer**: DiLoCo/LocalSGD, with AdamW as the inner optimizer and Nesterov SGD as the outer optimizer (see the sketch below)
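
DiLoCo decouples frequent local optimization from infrequent global synchronization: each worker takes many AdamW steps on its own data, then all workers exchange a pseudo-gradient (the offset between the shared weights and their local weights), which a Nesterov SGD outer step applies to the shared copy. A minimal single-worker sketch of one round (the helper name and step count are illustrative; the actual implementation lives in the prime framework):

```python
import torch

def diloco_round(model, shared_params, inner_opt, outer_opt, batches, inner_steps=500):
    """One DiLoCo round: local AdamW steps, then one outer Nesterov SGD step."""
    # Inner loop: ordinary local training (runs independently on every worker).
    for _ in range(inner_steps):
        loss = model(**next(batches)).loss  # assumes an HF-style model that returns .loss
        loss.backward()
        inner_opt.step()
        inner_opt.zero_grad()

    # Pseudo-gradient: how far the local weights drifted from the shared copy.
    # In the distributed setting this delta is what gets all-reduced (in int8).
    for local, shared in zip(model.parameters(), shared_params):
        shared.grad = shared.data - local.data

    outer_opt.step()      # Nesterov SGD update on the shared weights
    outer_opt.zero_grad()

    # Every worker restarts the next round from the updated shared weights.
    for local, shared in zip(model.parameters(), shared_params):
        local.data.copy_(shared.data)
```

Here `outer_opt` would be constructed over the shared copy, e.g. `torch.optim.SGD(shared_params, lr=..., momentum=0.9, nesterov=True)`.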

**Performance on benchmarks**

| Model | Size | Tokens | MMLU | GPQA | GSM8K | ARC-C | Hellaswag |
|---|---|---|---|---|---|---|---|
| INTELLECT-1 | 10B | 1T | 37.5 | 26.12 | 8.1 | 52.13 | 72.26 |
| LLaMA-7B | 7B | 1T | 35.1 | 23.1 | 9.7 | 50.43 | 78.19 |
| LLaMA-13B | 13B | 1T | 46.9 | 26.34 | 17.3 | 56.14 | 81.05 |
| LLaMA2-7B | 7B | 2T | 45.3 | 25.89 | 13.5 | 54.10 | 78.64 |
| LLaMA2-13B | 13B | 2T | 54.8 | 25.67 | 24.3 | 59.81 | 82.58 |
| MPT-7B | 7B | 1T | 26.8 | 25.67 | 8.3 | 46.67 | 77.41 |
| Falcon-7B | 7B | 1.5T | 26.2 | 23.66 | 4.9 | 47.61 | 78.23 |
| Pythia-12B | 12B | 300B | 26.5 | 24.33 | 4.09 | 40.61 | 68.83 |
| LLM360-Amber | 7B | 1.3T | 24.5 | 27.01 | 4.3 | 42.75 | 74.08 |

## **Citations**
If you use this model in your research, please cite it as follows:
```
@article{}
```