Safetensors · transformers_zamba2 · zamba2
BerenMillidge committed
Commit 595b350 · 1 Parent(s): 6b98f5f

Update README.md

Files changed (1): README.md (+7, -8)
README.md CHANGED
@@ -62,8 +62,7 @@ print(tokenizer.decode(outputs[0]))

Zamba2-1.2B utilizes and extends our original Zamba hybrid SSM-attention architecture. The core Zamba architecture consists of a backbone of Mamba layers interleaved with one or more shared attention layers (one shared attention block in Zamba1, two in Zamba2). This attention has shared weights to minimize the parameter cost of the model. We find that concatenating the original model embeddings to the input of this attention block improves performance, likely due to better maintenance of information across depth. The Zamba2 architecture also applies LoRA projection matrices to the shared transformer blocks to gain some additional expressivity in each block and to allow each shared block to specialize slightly to its own unique position while keeping the additional parameter overhead small.

- TODO
-
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/Vay6htbnBcySR3Z6NEgwj.png)

## Performance

@@ -71,16 +70,16 @@ Zamba2-1.2B achieves leading and state-of-the-art performance among models of <3

Zamba2-1.2B's high performance and small inference compute and memory footprint render it an ideal generalist model for on-device applications.

- TODO
-
- TODO
-
Time to First Token (TTFT) | Output Generation
:-------------------------:|:-------------------------:
- ![](https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/BmE8X6tDNVw5OJcbZt8sZ.png) | ![](https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/wECc9cItK1FW1MOMGSLrp.png)
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/5lpWDLdtPPVAk8COJq7gZ.png) | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/V2tS6eCOGbpKybEoZmOB7.png)
+
+
+ And memory overhead
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/m0YUmAmiVnRg6l9m10CEt.png)


- TODO

## Notice
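
The architecture paragraph in the diff above describes a shared attention block that is reused at multiple depths: its input is the current hidden state concatenated with the original token embeddings, and a small per-position LoRA projection lets each reuse of the shared weights specialize slightly. The sketch below illustrates that idea only; the class name `SharedAttentionBlock`, the dimensions, head count, and exact wiring are assumptions for illustration and are not taken from the actual Zamba2 implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of the shared-attention idea described in the README diff above.
# All names, sizes, and wiring are illustrative assumptions, not the real Zamba2
# code: one attention block whose weights are reused at several depths, fed with
# [hidden state ; original embeddings], plus a cheap per-position LoRA adapter.
class SharedAttentionBlock(nn.Module):
    def __init__(self, hidden_dim: int, num_positions: int,
                 num_heads: int = 8, lora_rank: int = 8):
        super().__init__()
        # Shared projection from the concatenated input back to hidden_dim.
        self.in_proj = nn.Linear(2 * hidden_dim, hidden_dim)
        # Shared self-attention weights (stored once, applied at every reuse).
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # One low-rank (LoRA) pair per position at which the block is reused.
        self.lora_down = nn.ModuleList(
            nn.Linear(2 * hidden_dim, lora_rank, bias=False) for _ in range(num_positions)
        )
        self.lora_up = nn.ModuleList(
            nn.Linear(lora_rank, hidden_dim, bias=False) for _ in range(num_positions)
        )

    def forward(self, hidden: torch.Tensor, embeddings: torch.Tensor,
                position: int) -> torch.Tensor:
        # Concatenate the original token embeddings onto the current hidden state
        # so input information is re-injected at depth.
        x = torch.cat([hidden, embeddings], dim=-1)
        # Shared projection plus the position-specific low-rank correction.
        h = self.in_proj(x) + self.lora_up[position](self.lora_down[position](x))
        attn_out, _ = self.attn(h, h, h)
        # Residual connection back into the backbone of Mamba layers.
        return hidden + attn_out


# Example: one shared block reused at two depths (as in Zamba2), with toy sizes.
if __name__ == "__main__":
    block = SharedAttentionBlock(hidden_dim=64, num_positions=2)
    emb = torch.randn(1, 16, 64)   # original token embeddings
    hidden = emb.clone()           # stand-in for the Mamba backbone state
    for pos in range(2):
        hidden = block(hidden, emb, position=pos)
    print(hidden.shape)            # torch.Size([1, 16, 64])
```

The attention parameters are paid for once but applied at several depths, while each per-position LoRA pair adds only on the order of `2 * lora_rank * hidden_dim` extra parameters, which matches the stated goal of keeping the additional parameter overhead small.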