---
library_name: peft
datasets:
- shareGPT
tags:
- llama2
inference: false
pipeline_tag: text-generation
---
# llama2-7b-glora 🦙

This model was built via parameter-efficient GLoRA finetuning of [llama2-7b](https://huggingface.co/meta-llama/Llama-2-7b) on the ShareGPT dataset. Only the attention layers are adapted with GLoRA.

* Model license: This model is released under the same license as LLaMA 2 (see the LICENSE file).
* GLoRA implementation: [script](https://github.com/Arnav0400/peft/blob/main/src/peft/tuners/glora.py)

## Model Description

The architecture matches LLaMA2-7B, except that bias terms are enabled (`bias=True`) in the attention projection layers.
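
If you want to verify this after loading the model (see "How to Use" below), a minimal sketch is to inspect the attention projections directly. It assumes the standard `LlamaForCausalLM` module layout from `transformers`:

```python
# Assumes `model` has been loaded as shown in the "How to Use" section below.
attn = model.model.layers[0].self_attn  # attention module of the first decoder block
print(attn.q_proj.bias is not None)  # True when attention bias terms are present
print(attn.k_proj.bias is not None)
print(attn.v_proj.bias is not None)
```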

## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

This model can produce factually incorrect output, and should not be relied on to produce factually accurate information.
This model was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## How to Use

Install and import the package dependencies:  

```python
!pip install -q -U huggingface_hub transformers torch accelerate
```
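
The model is loaded with `use_auth_token=True` below, so log in to the Hugging Face Hub first if you are not already authenticated. A minimal sketch, assuming you have a Hub access token:

```python
from huggingface_hub import login

login()  # prompts for (or reuses) your Hugging Face access token
```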

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
```

Basic model loading:

```python
model = AutoModelForCausalLM.from_pretrained(
    "MBZUAI-LLM/LLaMA2-7B-GLoRA-ShareGPT",
    use_auth_token=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("MBZUAI-LLM/LLaMA2-7B-GLoRA-ShareGPT")
```
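
If GPU memory is limited, the imported `BitsAndBytesConfig` can optionally be used to load the weights in 4-bit instead. This is a sketch, not part of the original recipe, and it additionally requires the `bitsandbytes` package (not included in the install command above):

```python
# Optional: 4-bit quantized loading (requires `pip install bitsandbytes`).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "MBZUAI-LLM/LLaMA2-7B-GLoRA-ShareGPT",
    use_auth_token=True,
    quantization_config=bnb_config,
    device_map="auto",
)
```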

Once loaded, the model and tokenizer can be used with the following code:

```python
def llama_generate(
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer,
    prompt: str,
    max_new_tokens: int = 128,
    temperature: float = 0.92,
) -> str:
    """
    Initialize the pipeline
    Uses Hugging Face GenerationConfig defaults
        https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/text_generation#transformers.GenerationConfig
    Args:
        model (transformers.AutoModelForCausalLM): Model for text generation
        tokenizer (transformers.AutoTokenizer): Tokenizer for model
        prompt (str): Prompt for text generation
        max_new_tokens (int, optional): Max new tokens after the prompt to generate. Defaults to 128.
        temperature (float, optional): The value used to modulate the next token probabilities.
            Defaults to 1.0
    """
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    inputs = tokenizer(
        [prompt],
        return_tensors="pt",
        return_token_type_ids=False,
    ).to(
        device
    )  # tokenize inputs, load on device
    # when running Torch modules in lower precision, it is best practice to use the torch.autocast context manager.
    with torch.autocast("cuda", dtype=torch.bfloat16):
        response = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            do_sample=True,  # sampling must be enabled for `temperature` to take effect
            return_dict_in_generate=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )
    decoded_output = tokenizer.decode(
        response["sequences"][0],
        skip_special_tokens=True,
    )  # grab output in natural language
    return decoded_output[len(prompt) :]  # remove prompt from output
```

We can now generate text! For example:

```python
prompt = "You are a helpful assistant. Tell me a recipe for vegan banana bread.\n"
response = llama_generate(
    model,
    tokenizer,
    prompt,
    max_new_tokens=500,
    temperature=0.92,
)
print(response)
```

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

## Citation for GLoRA

```
@misc{chavan2023oneforall,
      title={One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning}, 
      author={Arnav Chavan and Zhuang Liu and Deepak Gupta and Eric Xing and Zhiqiang Shen},
      year={2023},
      eprint={2306.07967},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
