Update README.md
README.md
```python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("SmallDoge/Doge-60M")
>>> model = AutoModelForCausalLM.from_pretrained("SmallDoge/Doge-60M", trust_remote_code=True)
>>> inputs = tokenizer("Hey how are you doing?", return_tensors="pt")

>>> out = model.generate(**inputs, max_new_tokens=100)
```
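
The snippet ends at `generate`; turning the returned token IDs back into text is a one-liner with the standard `transformers` decoding API (a follow-on sketch, not part of the original snippet):

```python
>>> print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```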
We built Doge by pre-training on [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).

> NOTE: If you want to continue pre-training this model, you can find the unconverged checkpoint [here](https://huggingface.co/SmallDoge/Doge-60M-checkpoint).
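
A minimal sketch of resuming pre-training from that checkpoint with the `transformers` `Trainer`; the dataset slice, tokenization length, and training hyperparameters below are illustrative assumptions rather than settings from this README, and it assumes the corpus exposes a `text` column:

```python
# Illustrative sketch only: resume pre-training from the unconverged checkpoint.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "SmallDoge/Doge-60M-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # the collator needs a pad token

# Tiny slice of the same corpus family used for pre-training, tokenized to the 2048 context length.
# (The full config is large; this slice only illustrates the shape of the loop.)
dataset = load_dataset("HuggingFaceTB/smollm-corpus", "cosmopedia-v2", split="train[:1000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="doge-60m-continued", bf16=True, per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM labels
)
trainer.train()
```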
> NOTE: These models have not been fine-tuned for instruction following; the instruction-tuned model is [here](https://huggingface.co/SmallDoge/Doge-60M-Instruct).
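
For reference, a sketch of prompting the instruction-tuned model; it assumes the Instruct tokenizer ships a chat template, which this README does not confirm:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SmallDoge/Doge-60M-Instruct")
model = AutoModelForCausalLM.from_pretrained("SmallDoge/Doge-60M-Instruct", trust_remote_code=True)

# Assumes the instruct tokenizer defines a chat template; otherwise format the prompt manually.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hey how are you doing?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```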
> TODO: The larger model is under training and will be uploaded soon.

| Model | Training Data | Steps | Context Length | Tokens | LR | Batch Size | Precision |
|---|---|---|---|---|---|---|---|
| [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 8k | 2048 | 4B | 8e-3 | 0.5M | bfloat16 |
| [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 16k | 2048 | 16B | 6e-3 | 1M | bfloat16 |
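
> The Tokens column is consistent with Steps × Batch Size if the batch size is counted in tokens per step (an assumption, not stated in the table); a quick check:

```python
# Sanity check: steps × tokens-per-batch reproduces the reported token counts.
assert 8_000 * 0.5e6 == 4e9    # Doge-20M: 8k steps × 0.5M-token batches = 4B tokens
assert 16_000 * 1e6 == 16e9    # Doge-60M: 16k steps × 1M-token batches = 16B tokens
```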
**Evaluation**:

| Model | MMLU | TriviaQA | ARC-E | ARC-C | PIQA | HellaSwag | OBQA | Winogrande | tokens / s on CPU |
|---|---|---|---|---|---|---|---|---|---|
| [Doge-20M](https://huggingface.co/SmallDoge/Doge-20M) | 25.43 | 0.03 | 36.83 | 22.78 | 58.38 | 27.25 | 25.60 | 50.20 | 142 |
| [Doge-60M](https://huggingface.co/SmallDoge/Doge-60M) | 26.41 | 0.18 | 50.46 | 25.34 | 61.43 | 31.45 | 28.00 | 50.75 | 62 |

> All evaluations are done using five-shot settings, without additional training on the benchmarks.
|