---
library_name: peft
license: mit
datasets:
- timdettmers/openassistant-guanaco
- tatsu-lab/alpaca
- BI55/MedText
language:
- en
pipeline_tag: text-generation
---
# Archimedes Model
This README provides instructions for running the Archimedes conversational AI assistant locally.
## Requirements
- Python 3.8+
- [Transformers](https://huggingface.co/docs/transformers/installation)
- [PEFT](https://github.com/huggingface/peft)
- PyTorch
- [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) (for 4-bit quantization)
- Access to the gated [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) model files, or a public clone of the weights
Install requirements:
```
pip install transformers peft torch datasets bitsandbytes accelerate
```
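4-bit bitsandbytes quantization runs on a CUDA GPU, so a quick sanity check before loading the model can save a confusing error later. This is a minimal, optional sketch:
```python
# Optional sanity check: 4-bit bitsandbytes quantization requires a CUDA-capable GPU.
import torch

assert torch.cuda.is_available(), "A CUDA GPU is required for 4-bit quantization."
print(f"Using GPU: {torch.cuda.get_device_name(0)}")
```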
## Usage
```python
import torch
from huggingface_hub import login
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

login()  # Needed for access to the gated Llama 2 base model.

# Base model to load the adapter on top of
model_name = "meta-llama/Llama-2-7b-chat-hf"

# 4-bit quantization configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the base model in 4-bit precision
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Load the trained Archimedes LoRA adapter onto the base model
model = PeftModel.from_pretrained(model, "harpyerr/archimedes-300s-7b-chat")

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Define prompt
text = "Can you tell me who made Space-X?"
prompt = "You are a helpful assistant. Please provide an informative response. \n\n" + text

# Generate response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
This logs in to the Hugging Face Hub, loads the Llama 2 base model with 4-bit quantization, attaches the trained Archimedes LoRA adapter, constructs a prompt, and generates a response.
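Because the base model is a chat-tuned Llama 2 variant, wrapping the request in the `[INST]`/`<<SYS>>` prompt format it was trained with often improves responses. A minimal sketch, reusing the `model` and `tokenizer` from the example above; the sampling parameters shown are illustrative:
```python
# Llama 2 chat prompt format: system message inside <<SYS>>, user message inside [INST].
system = "You are a helpful assistant. Please provide an informative response."
text = "Can you tell me who made Space-X?"
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{text} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```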
See the [docs](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForCausalLM) for more details.