pglo committed on
Commit c511077
1 Parent(s): 40c8655

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -52,13 +52,13 @@ Zamba2-2.7B-Instruct punches dramatically above its weight, achieving extremely
 <img src="https://cdn-uploads.huggingface.co/production/uploads/64e40335c0edca443ef8af3e/wXFMLXZA2-xz2PDyUMwTI.png" width="600"/>
 
 
-| Model | Size | MT-Bench | IFEval |
-|-------------|----|----|----|
-| **Zamba2-2.7B-Instruct** | 2.7B | **72.40** | **48.02** |
-| Mistral-7B-Instruct | 7B | 66.4 | 45.3 |
-| Gemma2-2B-Instruct | 2.7B | 51.69 | 42.20 |
-| H2O-Danube-4B-Chat | 4B | 52.57 | 37.96 |
-| StableLM-Zephyr-3B | 3B | 66.43 | 38.27 |
+| Model | Size | MT-Bench | IFEval |
+|---------------------------|-----:|---------:|---------:|
+| **Zamba2-2.7B-Instruct** | 2.7B | **72.40**| **48.02**|
+| Mistral-7B-Instruct | 7B| 66.4 | 45.3 |
+| Gemma2-2B-Instruct | 2.7B | 51.69 | 42.20 |
+| H2O-Danube-4B-Chat | 4B| 52.57 | 37.96 |
+| StableLM-Zephyr-3B | 3B| 66.43 | 38.27 |
 
 
 Moreover, due to its unique hybrid SSM architecture, Zamba2-2.7B-Instruct achieves extremely low inference latency and rapid generation with a significantly smaller memory footprint than comparable transformer-based models.
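The change in this commit is purely presentational: the rebuilt delimiter row uses trailing colons (e.g. `-----:`) to right-align the numeric columns, which GitHub-flavored markdown supports. As a quick illustration of that syntax, here is a minimal sketch (the `alignment_row` helper is hypothetical, not part of the commit) that generates such a delimiter row:

```python
def alignment_row(widths, aligns):
    """Build a markdown table delimiter row.

    widths: minimum dash count per column.
    aligns: 'left', 'right', or 'center' per column; markdown uses a
    trailing colon for right alignment and colons on both ends for center.
    """
    cells = []
    for width, align in zip(widths, aligns):
        dashes = "-" * max(width, 3)  # GFM requires at least one dash; 3 is conventional
        if align == "right":
            cells.append(dashes + ":")
        elif align == "center":
            cells.append(":" + dashes + ":")
        else:
            cells.append(dashes)
    return "|" + "|".join(cells) + "|"

# A delimiter row in the style of the updated README table:
# one left-aligned model-name column, three right-aligned numeric columns.
row = alignment_row([10, 4, 8, 8], ["left", "right", "right", "right"])
print(row)  # |----------|----:|--------:|--------:|
```

Widening the dash runs (as the commit does) has no effect on rendering; it only keeps the raw markdown source readable.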