# Llama2-sentiment-prompt-tuned
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.
## Model description
This model was trained with Parameter-Efficient Fine-Tuning (PEFT) using prompt tuning. Our goal was to evaluate bias within Llama 2, and prompt tuning is an efficient way to mitigate these biases while keeping the base model's weights frozen.
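For illustration, prompt tuning in the PEFT library prepends a small set of trainable virtual tokens to the frozen base model. The sketch below shows a minimal setup; the number of virtual tokens and the initialization text are assumptions for illustration, not values taken from this card.

```python
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

# Minimal prompt-tuning setup; num_virtual_tokens and the init text are
# illustrative assumptions, not the settings used for this checkpoint.
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Classify the sentiment as positive, negative or neutral.",
    num_virtual_tokens=8,
    tokenizer_name_or_path="meta-llama/Llama-2-7b-chat-hf",
)

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()  # only the virtual-token embeddings are trainable
```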
Classification report of Llama 2 on the original sentences:
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 1.00      | 1.00   | 1.00     | 576     |
| neutral      | 0.92      | 0.95   | 0.93     | 640     |
| positive     | 0.94      | 0.91   | 0.92     | 576     |
| accuracy     |           |        | 0.95     | 1792    |
| macro avg    | 0.95      | 0.95   | 0.95     | 1792    |
| weighted avg | 0.95      | 0.95   | 0.95     | 1792    |
Classification report of Llama 2 on the perturbed sentences:
|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| negative     | 0.93      | 0.74   | 0.82     | 576     |
| neutral      | 0.68      | 0.97   | 0.80     | 640     |
| positive     | 0.80      | 0.58   | 0.67     | 576     |
| accuracy     |           |        | 0.77     | 1792    |
| macro avg    | 0.80      | 0.76   | 0.76     | 1792    |
| weighted avg | 0.80      | 0.77   | 0.77     | 1792    |
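The reports above follow the layout of scikit-learn's `classification_report`. A minimal sketch of producing one from gold labels and model predictions could look like the following; `texts` and `gold_labels` are placeholders for an evaluation split, and `get_pred` is the helper defined in the usage example below.

```python
from sklearn.metrics import classification_report

# texts and gold_labels are illustrative placeholders for an evaluation split.
preds = [get_pred(text) for text in texts]
print(classification_report(gold_labels, preds, labels=["negative", "neutral", "positive"]))
```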
## Intended uses & limitations
You can use this model for your own sentiment-analysis task.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_name = "furquan/llama2-sentiment-prompt-tuned"

# Load the frozen base model, then attach the prompt-tuned adapter to it.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", device_map="auto"
)
model = PeftModel.from_pretrained(base_model, model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()

def get_pred(text):
    # The model expects the same prompt template it was tuned with.
    inputs = tokenizer(f"\n### Text: {text}\n### Sentiment:", return_tensors="pt").to(model.device)
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        max_new_tokens=1,
        do_sample=False,
    )
    # The last generated token is the predicted sentiment label.
    return tokenizer.decode(outputs[0], skip_special_tokens=True).split(" ")[-1]

prediction = get_pred("The weather is lovely today.")
print(prediction)
# >> positive
```
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training; a sketch mapping them onto `TrainingArguments` follows the list:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
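
As a hedged reference, these values roughly correspond to the `transformers.TrainingArguments` below; the output directory is a placeholder, and the Adam settings listed above match the library's default optimizer configuration.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-sentiment-prompt-tuned",  # placeholder, not from the card
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 8
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer setup.
)
```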
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1