Upload folder using huggingface_hub
- .gitattributes +1 -0
- README.md +64 -0
- llama2_7b_chat_uncensored.Q4_0.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+llama2_7b_chat_uncensored.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,64 @@
---
license: other
datasets:
- georgesung/wizard_vicuna_70k_unfiltered
---

# Overview
Fine-tuned [Llama-2 7B](https://huggingface.co/TheBloke/Llama-2-7B-fp16) with an uncensored/unfiltered Wizard-Vicuna conversation dataset (originally from [ehartford/wizard_vicuna_70k_unfiltered](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)).
QLoRA was used for fine-tuning. The model was trained for one epoch on a 24GB GPU (NVIDIA A10G) instance, which took ~19 hours.

The version here is the fp16 HuggingFace model.

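For reference, a minimal sketch of loading the fp16 model with `transformers` and prompting it in the style described below (assumes the upstream `georgesung/llama2_7b_chat_uncensored` repo id and a GPU with enough VRAM; not part of the original card):

```python
# Sketch: load the fp16 model and generate a short reply.
# Assumes the upstream repo id and that `transformers`, `torch`, and `accelerate` are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "georgesung/llama2_7b_chat_uncensored"  # assumption: upstream fp16 repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### HUMAN:\nHello\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
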
## GGML & GPTQ versions
Thanks to [TheBloke](https://huggingface.co/TheBloke), who has created the GGML and GPTQ versions:
* https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML
* https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GPTQ

## Running in Ollama
https://ollama.com/library/llama2-uncensored

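If Ollama is more convenient, a quick usage sketch via its Python client (assumes `pip install ollama` and a running Ollama server with the `llama2-uncensored` library model pulled; not part of the original card):

```python
# Sketch: chat with the Ollama library build of this model.
# Assumes the `ollama` Python client and a local Ollama server with the model pulled.
import ollama

response = ollama.chat(
    model="llama2-uncensored",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["message"]["content"])
```
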
# Prompt style
The model was trained with the following prompt style:
```
### HUMAN:
Hello

### RESPONSE:
Hi, how are you?

### HUMAN:
I'm fine.

### RESPONSE:
How can I help you?
...
```

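For programmatic use, a small sketch of assembling prompts in this format (`build_prompt` is an illustrative helper, not something from the training repo):

```python
# Sketch: build a prompt in the ### HUMAN: / ### RESPONSE: format above.
# `build_prompt` is an illustrative name, not part of the original training code.
def build_prompt(turns, next_human_message):
    """turns: list of (human, response) pairs from earlier in the conversation."""
    parts = []
    for human, response in turns:
        parts.append(f"### HUMAN:\n{human}\n\n### RESPONSE:\n{response}\n")
    parts.append(f"### HUMAN:\n{next_human_message}\n\n### RESPONSE:\n")
    return "\n".join(parts)

print(build_prompt([("Hello", "Hi, how are you?")], "I'm fine."))
```
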
# Training code
Code used to train the model is available [here](https://github.com/georgesung/llm_qlora).

To reproduce the results:
```
git clone https://github.com/georgesung/llm_qlora
cd llm_qlora
pip install -r requirements.txt
python train.py configs/llama2_7b_chat_uncensored.yaml
```

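Independent of that repo, the general QLoRA setup can be sketched with `transformers` + `peft` + `bitsandbytes`; the hyperparameters below are placeholders, not the values from `configs/llama2_7b_chat_uncensored.yaml`:

```python
# Illustrative QLoRA setup (4-bit base model + LoRA adapters).
# Hyperparameters are placeholders, NOT the values used to train this model.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize the frozen base model (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-fp16",              # base model named in the Overview
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # placeholder LoRA hyperparameters
    target_modules=["q_proj", "v_proj"],     # placeholder target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)    # only the LoRA adapters are trainable
model.print_trainable_parameters()
```
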
# Fine-tuning guide
https://georgesung.github.io/ai/qlora-ift/

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_georgesung__llama2_7b_chat_uncensored).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 43.39 |
| ARC (25-shot)       | 53.58 |
| HellaSwag (10-shot) | 78.66 |
| MMLU (5-shot)       | 44.49 |
| TruthfulQA (0-shot) | 41.34 |
| Winogrande (5-shot) | 74.11 |
| GSM8K (5-shot)      | 5.84  |
| DROP (3-shot)       | 5.69  |
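The Avg. row appears to be the unweighted mean of the seven benchmark scores, and the arithmetic checks out:

```python
# Quick check: the reported Avg. (43.39) matches the unweighted mean of the seven benchmarks.
scores = [53.58, 78.66, 44.49, 41.34, 74.11, 5.84, 5.69]
print(round(sum(scores) / len(scores), 2))  # -> 43.39
```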
llama2_7b_chat_uncensored.Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6db24c84ba97279b0c3494abf0c839d95e68dd3ee74fb634f00fc528378ada59
+size 3825807456
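The uploaded Q4_0 GGUF (~3.8 GB) can be run locally with llama.cpp bindings; a minimal sketch assuming `llama-cpp-python` is installed and the file has been downloaded (the local path is a placeholder):

```python
# Sketch: run the Q4_0 GGUF with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a locally downloaded copy of the .gguf file.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama2_7b_chat_uncensored.Q4_0.gguf",  # placeholder local path
    n_ctx=2048,                                           # context window; adjust as needed
)

prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"
output = llm(prompt, max_tokens=64, stop=["### HUMAN:"])
print(output["choices"][0]["text"])
```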