---
license: apache-2.0
library_name: transformers
---

Unofficial dequantized weights of [grok-1](https://huggingface.co/xai-org/grok-1) in HF Transformers format.

Note: If you haven't downloaded the weights yet, please use the `fp32` revision instead, which keeps the RMSNorm and Router layers in float32 precision for better consistency.

The (fp32) weights were converted using the [script here](https://gist.github.com/chu-tianxiang/ec310e15d56949fd0f351cb5f65ee7a1), run inside the [grok-1 repo](https://github.com/xai-org/grok-1). Since the dequantized weights take twice as long to download, it's recommended to download the original weights and convert them yourself.
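Once the weights are available (or to download them directly), they can be loaded with the standard Transformers API. A minimal sketch, assuming the placeholder repo id is replaced with this repository's actual id and that `bitsandbytes` is installed for 8-bit loading (as used for the benchmarks below):

```python
"""Minimal loading sketch for the dequantized grok-1 weights.

Assumptions (not from this card): MODEL_ID is a placeholder for this
repository's id, and 8-bit loading requires the `bitsandbytes` package.
"""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<this-repo-id>"  # placeholder: substitute this HF repo's id
REVISION = "fp32"            # the recommended revision (float32 RMSNorm/Router)


def load_model():
    """Load grok-1 in 8-bit to fit on limited hardware."""
    tokenizer = AutoTokenizer.from_pretrained(
        MODEL_ID, revision=REVISION, trust_remote_code=True
    )
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        revision=REVISION,
        trust_remote_code=True,
        load_in_8bit=True,   # quantize on load; needs bitsandbytes
        device_map="auto",   # shard across available GPUs
        torch_dtype=torch.bfloat16,
    )
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("The answer is", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```

Note that loading in full precision requires several hundred GB of memory; `load_in_8bit` roughly halves the footprint relative to bf16 at a small cost in quality.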

#### Benchmarks 
(I ran the evaluations with `load_in_8bit` using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) due to limited hardware, so the results may be slightly worse than full precision.)
* MMLU 5-shot: 0.7166
* BBH 3-shot: 0.5204