Upload model
- README.md +2 -111
- adapter_config.json +23 -0
- adapter_model.safetensors +3 -0
README.md
CHANGED
@@ -1,118 +1,9 @@
---
library_name: peft
-license: llama2
-datasets:
-- ehartford/dolphin
-- garage-bAInd/Open-Platypus
-tags:
-- llama-2
-inference: false
-pipeline_tag: text-generation
----
-
-# llama-2-7b-dolphin 🦙🐬
-
-This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the first 5k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) and the first 5k rows of [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Finetuning was executed on 1x A100 (40 GB SXM) for roughly 1.3 hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
-
-
-* Model license: Llama 2 Community License Agreement
-* Basic usage: [notebook](assets/basic_inference_llama_2_7b_dolphin.ipynb)
-* Finetuning script: [script](https://github.com/daniel-furman/sft-demos/blob/main/src/sft/one_gpu/llama-2/dolphin/sft-llama-2-7b-dolphin-peft.py)
-* Loss curves: [plot](https://huggingface.co/dfurman/llama-2-7b-dolphin-peft#finetuning-description)
-* Runtime stats: [table](https://huggingface.co/dfurman/llama-2-7b-dolphin-peft#runtime-tests)
-
-### Example prompts and responses
-
-Example 1:
-
-**User**:
->You are a helpful assistant. Write me a numbered list of things to do in New York City.\n
-
-**llama-2-7b-dolphin-peft**:
-
-coming
-
-<br>
-
-Example 2:
-
-**User**:
->You are a helpful assistant. Write a short email inviting my friends to a dinner party on Friday. Respond succinctly.\n
-
-**llama-2-7b-dolphin-peft**:
-
-coming
-
-<br>
-
-## Model Description
-
-The architecture is a modification of a standard decoder-only transformer.
-
-The llama-2-7b models have been modified from a standard transformer in the following ways:
-* It uses the [SwiGLU activation function](https://arxiv.org/abs/2002.05202)
-* It uses [rotary positional embeddings](https://arxiv.org/abs/2104.09864) (RoPE)
-
-| Hyperparameter | Value |
-|----------------|-------|
-| n_parameters | 7B |
-| tokens | 2.0T |
-| vocab size | 32000 |
-| sequence length | 4096 |
-
-## Finetuning Description
-
-![loss curves](https://raw.githubusercontent.com/daniel-furman/sft-demos/main/assets/jul_24_23_1_13_00_log_loss_curves_llama-2-7b-dolphin.png)
-
-The above loss curve was generated from the run's private wandb.ai log.
-
-## PreTraining Data
-
-For more details on the pretraining process, see [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
-
-The data was tokenized using the [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) tokenizer.
-
-## Limitations and Biases
-
-_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
-
-This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
-This model was trained on various public datasets.
-While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.
-
-
-## How to Use
-
-coming
-
-### Runtime tests
-
-coming
-
-## Acknowledgements
-
-This model was finetuned by Daniel Furman on Sep 10, 2023, and is intended primarily for research purposes.
-
-## Disclaimer
-
-The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
-
-## Meta citation for llama-2 blog
-
-```
-@online{Meta2023Introducing,
-    author = {Meta AI},
-    title = {Meta and Microsoft Introduce the Next Generation of Llama},
-    year = {2023},
-    url = {https://about.fb.com/news/2023/07/llama-2/},
-    note = {Accessed: 2023-07-24},
-    urldate = {2023-07-24}
-}
-```
-
---
+## Training procedure

### Framework versions


-- PEFT 0.
+- PEFT 0.6.0.dev0
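The removed model card pointed to a basic-usage notebook that is not reproduced in this commit. As a rough orientation only, a minimal inference sketch for this kind of PEFT adapter could look like the following. It assumes the `transformers`, `peft`, and `torch` packages, access to the gated `meta-llama/Llama-2-7b-hf` weights, and the adapter repo id `dfurman/llama-2-7b-dolphin-peft`; it is an illustrative sketch, not the author's notebook.

```python
# Illustrative sketch: load the Llama-2 base model and apply this LoRA adapter for generation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "dfurman/llama-2-7b-dolphin-peft"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA adapter weights from this repo on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "You are a helpful assistant. Write me a numbered list of things to do in New York City.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```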
adapter_config.json
ADDED
@@ -0,0 +1,23 @@
+{
+  "auto_mapping": null,
+  "base_model_name_or_path": "meta-llama/Llama-2-7b-hf",
+  "bias": "none",
+  "fan_in_fan_out": false,
+  "inference_mode": true,
+  "init_lora_weights": true,
+  "layers_pattern": null,
+  "layers_to_transform": null,
+  "lora_alpha": 16,
+  "lora_dropout": 0.1,
+  "modules_to_save": null,
+  "peft_type": "LORA",
+  "r": 64,
+  "revision": null,
+  "target_modules": [
+    "q_proj",
+    "k_proj",
+    "v_proj",
+    "o_proj"
+  ],
+  "task_type": "CAUSAL_LM"
+}
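For readers cross-referencing these fields with the `peft` API, a `LoraConfig` mirroring the JSON above might be constructed as follows; this is an illustrative sketch, with the values copied directly from `adapter_config.json`.

```python
# Illustrative peft.LoraConfig equivalent of the adapter_config.json above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                  # LoRA rank
    lora_alpha=16,         # LoRA scaling factor
    lora_dropout=0.1,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
```

Targeting only the attention projections (q/k/v/o) keeps the trainable parameter count small relative to the 7B base model.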
adapter_model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c8bf9562024a294e6cf0668d48975fa3599dedb86d94d267acc4d1f1cd9bd42
+size 268470272
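The file above is a Git LFS pointer rather than the weights themselves; the actual safetensors blob is resolved at download time. A small sketch for checking a downloaded copy against the pointer's size and sha256 (the local path is a hypothetical example):

```python
# Illustrative check of a downloaded adapter_model.safetensors against the LFS pointer above.
import hashlib
from pathlib import Path

path = Path("adapter_model.safetensors")  # hypothetical local download path

sha256 = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

print("size matches:", path.stat().st_size == 268470272)
print("sha256 matches:",
      sha256.hexdigest() == "0c8bf9562024a294e6cf0668d48975fa3599dedb86d94d267acc4d1f1cd9bd42")
```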