---
license: cc
datasets:
- VMware/open-instruct-v1.1-oasst-dolly-hhrlhf
language:
- en
library_name: transformers
pipeline_tag: conversational
---

# VMware/open-llama-0.3T-7B-open-instruct-v1.1

## License
<ul>
<li>Fully open source and <b>commercially viable</b>.</li>
<li>The instruction dataset, <a href="https://huggingface.co/datasets/VMware/open-instruct-v1.1-oasst-dolly-hhrlhf">VMware/open-instruct-v1.1-oasst-dolly-hhrlhf</a>, is under the cc-by-sa-3.0 license, and the language model (<a href="https://huggingface.co/openlm-research/open_llama_7b_preview_300bt/tree/main/open_llama_7b_preview_300bt_transformers_weights">openlm-research/open_llama_7b_preview_300bt</a>) is under the apache-2.0 license.</li>
</ul>

## Nomenclature
<ul>
 <li>Model: Open-llama</li>
 <li>Model trained on: 300B (0.3T) tokens</li>
 <li>Model size: 7B parameters</li>
 <li>Dataset: Open-instruct-v1.1 (oasst, dolly, hhrlhf)</li>
</ul>

## Use in Transformers

Please load the tokenizer with the `add_bos_token=True` parameter, as the underlying OpenLLaMA model and this model were trained with a BOS token.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = 'VMware/open-llama-0.3T-7B-open-instruct-v1.1'

# The tokenizer must prepend a BOS token; see the note above.
tokenizer = AutoTokenizer.from_pretrained(model_name, add_bos_token=True)

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='sequential')

# Alpaca-style prompt template used during instruction tuning.
prompt_template = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:"

prompt = 'Explain in simple terms how the attention mechanism of a transformer model works'

input_text = prompt_template.format(instruction=prompt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

output = model.generate(input_ids, max_length=512)

# Strip the prompt tokens so only the generated response is decoded.
input_length = input_ids.shape[1]
output = output[:, input_length:]
response = tokenizer.decode(output[0])

print(response)

'''
The attention mechanism of a transformer model is designed to help the model understand the relationship between different parts of a sentence.
The model uses a weighted attention score to determine how much each input token contributes to the output.
The attention score is calculated by looking at the similarity between each input token and the output token, and assigning a weight to each input token based on this similarity.
This way, the model can better understand the relationship between different parts of a sentence and generate more accurate predictions.
'''
```
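
As a quick sanity check that the BOS token is actually being prepended (a minimal sketch; `tokenizer.bos_token_id` is the standard `transformers` attribute, and the id of 1 mentioned in the comment assumes the usual LLaMA vocabulary):

```
# Verify that add_bos_token=True took effect: the first input id should
# equal the tokenizer's BOS id (1 in the standard LLaMA vocabulary).
ids = tokenizer("hello", return_tensors="pt").input_ids[0].tolist()
assert ids[0] == tokenizer.bos_token_id, "tokenizer did not prepend a BOS token"
```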

## Drawbacks
<ul>
<li>The model was trained on a partially trained Open-LLaMA checkpoint (300B tokens).</li>
<li>The model is inconsistent about emitting '\n' tokens, as the majority of the dataset is derived from <a href="https://huggingface.co/datasets/mosaicml/dolly_hhrlhf">mosaicml/dolly_hhrlhf</a>, which removed newline characters from its responses. A display-side workaround is sketched below the list.</li>
</ul>
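
Since the missing newlines affect readability rather than content, one possible workaround (a minimal display-side sketch, not part of the model's documented usage; `pretty_print` is a hypothetical helper name) is to re-wrap the decoded response before printing:

```
import textwrap

# Cosmetic re-wrapping of a response that lacks newline tokens.
# This does not recover the original line breaks; it only aids readability.
def pretty_print(response: str, width: int = 100) -> None:
    for line in textwrap.wrap(response.strip(), width=width):
        print(line)

pretty_print(response)  # reuse the `response` string from the example above
```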

## Evaluation

<B>TODO</B>