---
library_name: peft
datasets:
- shareGPT
tags:
- llama
inference: false
pipeline_tag: text-generation
---
# llama-7b-glora 🦙
This model was built via parameter-efficient GLoRA finetuning of llama-7b on the shareGPT dataset. We adapt only the attention layers using GLoRA.
- Model license: This model is released under a non-commercial license (see the LICENSE file), the same as LLaMA.
- GLoRA implementation: script
## Model Description
The architecture is the same as LLaMA-7B, except that bias is enabled in the attention layers.
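As a rough sketch (not part of the original card), this difference can be expressed in configuration terms; it assumes a recent `transformers` release in which `LlamaConfig` exposes an `attention_bias` flag:

```python
# Sketch only: LLaMA-7B shape with bias enabled in the attention projections.
# `attention_bias` is assumed to exist in the installed transformers version.
from transformers import LlamaConfig

config = LlamaConfig(
    hidden_size=4096,        # LLaMA-7B defaults
    intermediate_size=11008,
    num_hidden_layers=32,
    num_attention_heads=32,
    attention_bias=True,     # q/k/v/o projections carry bias terms
)
```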
## Limitations and Biases
*The following language is modified from EleutherAI's GPT-NeoX-20B model card.*
This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## How to Use
Install and import the package dependencies:
```python
!pip install -q -U huggingface_hub transformers torch accelerate

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
```
Basic model loading:
```python
model = AutoModelForCausalLM.from_pretrained(
    "MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT",
    use_auth_token=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT")
```
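If GPU memory is tight, the same checkpoint can alternatively be loaded with 4-bit quantization through the `BitsAndBytesConfig` imported above. This is a sketch rather than part of the original card, and it additionally assumes `bitsandbytes` is installed:

```python
# Optional 4-bit loading (sketch; assumes `pip install bitsandbytes`).
# This replaces the bfloat16 load above, trading some accuracy for memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "MBZUAI-LLM/LLaMA-7B-GLoRA-ShareGPT",
    use_auth_token=True,
    quantization_config=bnb_config,
    device_map="auto",
)
```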
Once loaded, the model and tokenizer can be used with the following code:
```python
def llama_generate(
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer,
    prompt: str,
    max_new_tokens: int = 128,
    temperature: float = 0.92,
) -> str:
    """
    Generate a response to a prompt.

    Uses Hugging Face GenerationConfig defaults
        https://huggingface.co/docs/transformers/v4.29.1/en/main_classes/text_generation#transformers.GenerationConfig

    Args:
        model (transformers.AutoModelForCausalLM): LLaMA model for text generation
        tokenizer (transformers.AutoTokenizer): Tokenizer for model
        prompt (str): Prompt for text generation
        max_new_tokens (int, optional): Max new tokens after the prompt to generate. Defaults to 128.
        temperature (float, optional): The value used to modulate the next token probabilities.
            Defaults to 0.92.
    """
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    inputs = tokenizer(
        [prompt],
        return_tensors="pt",
        return_token_type_ids=False,
    ).to(
        device
    )  # tokenize inputs, load on device

    # When running Torch modules in lower precision, it is best practice to use
    # the torch.autocast context manager.
    with torch.autocast("cuda", dtype=torch.bfloat16):
        response = model.generate(
            **inputs,
            do_sample=True,  # sample so that temperature takes effect
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            return_dict_in_generate=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    decoded_output = tokenizer.decode(
        response["sequences"][0],
        skip_special_tokens=True,
    )  # grab output in natural language

    return decoded_output[len(prompt) :]  # remove prompt from output
```
We can now generate text! For example:
```python
prompt = "You are a helpful assistant. Tell me a recipe for vegan banana bread.\n"

response = llama_generate(
    model,
    tokenizer,
    prompt,
    max_new_tokens=500,
    temperature=0.92,
)
print(response)
```
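For interactive use, output can also be streamed token by token. The snippet below is a sketch (not part of the original card) built on transformers' `TextStreamer`:

```python
# Streaming sketch: prints tokens to stdout as they are generated.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
inputs = tokenizer([prompt], return_tensors="pt", return_token_type_ids=False).to(model.device)
with torch.autocast("cuda", dtype=torch.bfloat16):
    model.generate(
        **inputs,
        streamer=streamer,
        do_sample=True,
        max_new_tokens=500,
        temperature=0.92,
    )
```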
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation for GLoRA
```bibtex
@misc{chavan2023oneforall,
      title={One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning},
      author={Arnav Chavan and Zhuang Liu and Deepak Gupta and Eric Xing and Zhiqiang Shen},
      year={2023},
      eprint={2306.07967},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```