Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Jamba-tiny-random - bnb 4bits
- Model creator: https://huggingface.co/ai21labs/
- Original model: https://huggingface.co/ai21labs/Jamba-tiny-random/
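
The snippet below is not part of the original card; it is a minimal sketch of how a "bnb 4bits" quantization of this kind can be loaded through `transformers` and `bitsandbytes`, assuming a recent `transformers` release with native Jamba support and `accelerate` available for `device_map="auto"`. The model id shown is the original checkpoint; swap in this repository's id to use the pre-quantized weights.

```python
# Minimal sketch (assumption, not the uploader's exact command): load the
# original Jamba-tiny-random checkpoint in bitsandbytes 4-bit precision.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ai21labs/Jamba-tiny-random"  # original model; use this repo's id for the pre-quantized weights

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```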

Original model description:
---
license: apache-2.0
---

This is a tiny, dummy version of [Jamba](https://huggingface.co/ai21labs/Jamba-v0.1), used for debugging and experimentation with the Jamba architecture.

It has 128M parameters (instead of 52B), **is initialized with random weights, and did not undergo any training.**
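
As a rough illustration of that debugging use (my own example, not something documented by AI21), a single forward pass is enough to exercise the architecture end to end; the outputs are meaningless because the weights are random:

```python
# Minimal sketch (assumption): sanity-check the Jamba architecture with the
# tiny random-weight model via one forward pass.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-tiny-random"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, Jamba!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits.shape)  # (batch_size, sequence_length, vocab_size); values are untrained noise
```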