# ashrafulparan/llama-3-finetuned-for-subjectivity-english-lora
A LoRA fine-tune of Llama-3 for English subjectivity classification. Given a sentence, the model outputs 1 for subjective and 0 for objective.
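Below is a minimal inference sketch, assuming the standard PEFT loading path (a CUDA GPU with bitsandbytes is needed, since the base model is 4-bit). The prompt template is a hypothetical illustration; the card does not document the format used during training.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model_id = "ashrafulparan/llama-3-finetuned-for-subjectivity-english-lora"

# Loads the base model named in adapter_config.json and attaches the LoRA adapter.
model = AutoPeftModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical prompt format -- adjust to match the template used during fine-tuning.
prompt = (
    "Classify the sentence as 1 (subjective) or 0 (objective).\n"
    "Sentence: I think this movie is wonderful.\n"
    "Label:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=2, do_sample=False)

# Decode only the newly generated tokens.
label = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip()
print(label)  # expected: "1" for a subjective sentence
```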
## Model Details

### Model Hyperparameters
```python
import torch
from transformers import TrainingArguments

args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,            # effective batch size of 8
    warmup_steps = 5,
    num_train_epochs = 12,
    learning_rate = 5e-5,
    fp16 = not torch.cuda.is_bf16_supported(),  # fall back to fp16 on GPUs without bf16
    bf16 = torch.cuda.is_bf16_supported(),
    logging_steps = 10,
    optim = "adamw_8bit",                       # 8-bit AdamW (requires bitsandbytes)
    weight_decay = 0.001,
    lr_scheduler_type = "linear",
    seed = 3407,
    output_dir = "outputs",
    report_to = "none",
)
```
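For context, here is a hedged sketch of how these arguments might be wired into an Unsloth + TRL training run. The LoRA configuration, dataset path, and text field name are assumptions; the card does not publish the training script.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from datasets import load_dataset

# Load the 4-bit base and wrap it with a LoRA adapter (Unsloth's usual recipe;
# the rank and target modules actually used for this model are not published).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = 2048,          # assumed; not stated on the card
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset -- substitute the actual subjectivity training data.
dataset = load_dataset("json", data_files = "subjectivity_train.jsonl", split = "train")

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",    # assumed field name
    max_seq_length = 2048,
    args = args,                    # the TrainingArguments defined above
)
trainer.train()
```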
### Framework versions

- PEFT 0.11.1
## Base model

This adapter was trained on [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit), a 4-bit (bitsandbytes) quantization of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).
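If you prefer to manage the base model explicitly rather than using `AutoPeftModelForCausalLM`, the adapter can be attached by hand with PEFT (a sketch; requires a CUDA GPU with bitsandbytes, since the base ships in 4-bit):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit base the adapter was trained against, then attach the LoRA weights.
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "ashrafulparan/llama-3-finetuned-for-subjectivity-english-lora")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")
```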