Update README.md
README.md CHANGED
@@ -77,7 +77,7 @@ The `untrained-special-tokens-fixed` branch is the same model as the main branch
 | Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Special Tokens Fixed | Description |
 | ------ | ---- | ---------- | --------- | ------ | ------------ | --------------- | --------- | ------- | -------------------- | ----------- |
 | [main](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | No | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss |
-| [untrained-special-tokens-fixed](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-4-Bit/tree/untrained-special-tokens-fixed) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | Yes |
+| [untrained-special-tokens-fixed](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-4-Bit/tree/untrained-special-tokens-fixed) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | Yes | Same as the main branch. The untrained special tokens, which caused exploding/NaN gradients, have had their embedding values set to the per-feature average of the trained tokens' embeddings |
 | More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | May upload additional variants of GPTQ 4-bit models in the future using different parameters, such as other group sizes |

 ## Serving this GPTQ model using vLLM
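For reference, the embedding fix described in the `untrained-special-tokens-fixed` row can be reproduced along these lines. This is a minimal sketch, not the exact script used for that branch: it assumes the fix is applied to the unquantized base model before GPTQ quantization, and the reserved-special-token ID range used to flag untrained rows is an illustrative assumption (a more robust approach would detect near-zero embedding rows).

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: the fix is applied to the fp16/bf16 base model before quantization.
model_id = "meta-llama/Meta-Llama-3-8B"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

with torch.no_grad():
    input_embeddings = model.get_input_embeddings().weight    # [vocab_size, hidden_size]
    output_embeddings = model.get_output_embeddings().weight  # lm_head, [vocab_size, hidden_size]

    # Illustrative assumption: Llama 3's reserved, never-trained special tokens
    # occupy this ID range. Detecting (near-)zero rows would be more robust
    # than hard-coding IDs.
    untrained_ids = torch.arange(128002, 128256)

    trained_mask = torch.ones(input_embeddings.shape[0], dtype=torch.bool)
    trained_mask[untrained_ids] = False

    # Per-feature average over the trained rows only, then overwrite the
    # untrained rows so their scale matches the rest of the vocabulary.
    input_embeddings[untrained_ids] = input_embeddings[trained_mask].mean(dim=0)
    output_embeddings[untrained_ids] = output_embeddings[trained_mask].mean(dim=0)

model.save_pretrained("Llama-3-8B-special-tokens-fixed")
```

Running GPTQ quantization on a checkpoint patched this way would yield a model matching the behavior described for the `untrained-special-tokens-fixed` branch.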