datasets:
- grimulkan/LimaRP-augmented
- mpasila/LimaRP-augmented-8k-context
---
This is a 4bpw ExLlamaV2 quantization of [mpasila/Llama-3-LimaRP-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-8B), made using the default calibration dataset.

# Original Model card:

This is a merge of [mpasila/Llama-3-LimaRP-LoRA-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-LoRA-8B).

The LoRA was trained in 4-bit with 8k context for 1 epoch, using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B/) as the base model.
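As a usage sketch (not part of the original card): the 4bpw ExLlamaV2 files in this repo are intended for ExLlamaV2-based loaders (e.g. text-generation-webui or TabbyAPI), while the unquantized merge linked above can be loaded with the `transformers` library roughly like this. The model id is taken from the links in this card; the function name and generation settings are illustrative.

```python
# Illustrative sketch: load the unquantized merge with transformers.
# Assumes `transformers` (and a torch backend) are installed; the 4bpw
# ExL2 quant itself requires an ExLlamaV2-based loader instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mpasila/Llama-3-LimaRP-8B"  # repo id from this card

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion for `prompt` (hypothetical helper)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Since the LoRA was trained with 8k context, prompts up to that length should be usable with the merged weights.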