---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# VMware/open-llama-13B-open-instruct

Instruction-tuned version of the fully trained Open LLaMA 13B model. The model is open for <b>COMMERCIAL USE</b>. <br>

<b>NOTE</b>: The model was trained using the Alpaca prompt template; the rendered format is shown below. <br>
<b>NOTE</b>: The fast tokenizer produces incorrect encodings; set `use_fast=False` when instantiating the tokenizer. <br>
<b>NOTE</b>: The model may struggle with code, as the tokenizer merges multiple spaces.
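
For reference, the Alpaca prompt template from the usage example below renders like this once an instruction is filled in (the model's answer is generated after the `### Response:` header):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Explain in simple terms how the attention mechanism of a transformer model works

### Response:
```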

## License
- <b>Commercially viable</b>
- The instruction dataset, [VMware/open-instruct-v1-oasst-dolly-hhrlhf](https://huggingface.co/datasets/VMware/open-instruct-v1-oasst-dolly-hhrlhf), is under cc-by-sa-3.0
- The language model, [openlm-research/open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b), is under apache-2.0

## Nomenclature

- Model: Open LLaMA
- Model size: 13B parameters
- Dataset: Open-instruct-v1 (oasst, dolly, hhrlhf)

## Use in Transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-13b-open-instruct'

# The fast tokenizer mis-encodes text for this model, so use_fast=False is required
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

# Alpaca-style prompt template used during instruction tuning
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'

input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

# max_length counts the prompt tokens as well as the generated ones
output_ids = model.generate(input_ids, max_length=512)

# Strip the prompt from the output so only the model's response remains
input_length = input_ids.shape[1]
output_ids = output_ids[:, input_length:]
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(output)
```
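
The whitespace caveat above can be checked directly. The sketch below is not part of the original card: it round-trips an indented snippet through the tokenizer so you can inspect whether runs of spaces survive; the model name is the card's own, everything else is illustrative.

```python
# Illustrative sketch: check the multiple-space merging behaviour noted above
# by encoding an indented snippet and decoding it again.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('VMware/open-llama-13b-open-instruct', use_fast=False)

snippet = "def add(a, b):\n    return a + b"  # four-space indentation
ids = tokenizer(snippet).input_ids
round_trip = tokenizer.decode(ids, skip_special_tokens=True)

# If the indentation comes back collapsed, that tokenizer behaviour is the
# reason the card warns about degraded performance on code.
print(repr(round_trip))
```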

## Finetuning details
The finetuning scripts will be available in our [RAIL Github Repository](https://github.com/vmware-labs/research-and-development-artificial-intelligence-lab/tree/main/instruction-tuning).

## Evaluation

<b>TODO</b>