kecik committed
Commit 2dfe295 • Parent(s): a8d2a2b

Added measurements.json, updated readme

Files changed:
- README.md +18 -1
- measurements.json +0 -0
README.md
CHANGED
@@ -1,3 +1,20 @@
+---
+language:
+- en
+tags:
+- tag1
+- tag2
+license: "any valid license identifier"
+base_model: meta-llama/Llama-2-70b-hf
+---
+
+4.65bpw exl2 quant of [ausboss/SuperCOT-70B](https://huggingface.co/ausboss/SuperCOT-70B/tree/main).
+Calibration done using [wikitext](https://huggingface.co/datasets/wikitext/blob/refs%2Fconvert%2Fparquet/wikitext-103-v1/test/0000.parquet).
+measurements.json file included in repo.
+
+Original model card below:
+
+
 Special thanks to Alpin, Tav and the rest of the Pygmalion peeps involved in training this one. It's trained on the supercot dataset like my other qloras and models. I'll update the card with more info soon.
 
-Might be a bit overbaked 🧑‍🍳🔥
+Might be a bit overbaked 🧑‍🍳🔥
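For context (not part of the commit): a minimal sketch of loading a 4.65bpw exl2 quant like this one for inference with exllamav2's Python API. Class and method names follow exllamav2's bundled example scripts and may shift between versions; the model directory is a placeholder for wherever the quant is downloaded.

```python
# Minimal sketch: run the exl2 quant with exllamav2.
# Assumption: "SuperCOT-70B-4.65bpw-exl2" is a local directory holding this
# repo's files (quantized weights, config.json, tokenizer).
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "SuperCOT-70B-4.65bpw-exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache tensors allocated as layers load
model.load_autosplit(cache)               # split the 70B weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7

# Alpaca-style prompt, an assumption based on how SuperCOT models are usually prompted
prompt = "### Instruction:\nName three uses of rope.\n\n### Response:\n"
print(generator.generate_simple(prompt, settings, 200))
```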
measurements.json
ADDED
The diff for this file is too large to render.
See raw diff
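Aside (not in the commit): measurements.json is the output of exllamav2's per-layer measurement pass, and shipping it lets others requantize the source model at a different bitrate without redoing that pass. A sketch of that reuse, assuming convert.py's documented flags (-i, -o, -cf, -b, -c, -m) and placeholder paths; flag names may differ between exllamav2 versions.

```python
# Sketch: requantize SuperCOT-70B at a different bitrate, reusing this
# repo's measurements.json to skip the slow measurement pass.
# All paths are placeholders.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "SuperCOT-70B",               # fp16 source model directory
        "-o", "work",                        # scratch directory for the job
        "-cf", "SuperCOT-70B-5.0bpw-exl2",   # compiled quantized output
        "-b", "5.0",                         # target bits per weight
        "-c", "0000.parquet",                # wikitext calibration parquet
        "-m", "measurements.json",           # reuse this repo's measurement file
    ],
    check=True,
)
```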