---
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
license: apache-2.0
language:
  - en
pipeline_tag: zero-shot-classification
tags:
  - subjectivity
---

# ashrafulparan/llama-3-finetuned-for-subjectivity-english-lora

This is a fine-tuned (LoRA) version of Llama-3 for a subjectivity classification dataset. Given a sentence, it outputs 1 for subjective and 0 for objective.
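
The adapter can be loaded on top of the 4-bit base model with PEFT for inference. The sketch below is not part of this card: the exact prompt template used during fine-tuning is not documented, so the instruction-style prompt shown here is an assumption and may need to be adapted.

```python
# Minimal inference sketch. Assumption: the prompt format below is NOT
# documented in this card and may differ from the one used in training.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "unsloth/llama-3-8b-bnb-4bit"
adapter_id = "ashrafulparan/llama-3-finetuned-for-subjectivity-english-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

prompt = (
    "Classify the sentence as 1 (subjective) or 0 (objective).\n"
    "Sentence: I think this movie is wonderful.\n"
    "Label:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=2)
# Decode only the newly generated label token(s)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```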

## Model Details

### Model Hyperparameters

```python
import torch
from transformers import TrainingArguments

args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,
    warmup_steps = 5,
    num_train_epochs = 12,
    learning_rate = 5e-5,
    fp16 = not torch.cuda.is_bf16_supported(),  # use fp16 when bf16 is unavailable
    bf16 = torch.cuda.is_bf16_supported(),      # prefer bf16 on supported GPUs
    logging_steps = 10,
    optim = "adamw_8bit",
    weight_decay = 0.001,
    lr_scheduler_type = "linear",
    seed = 3407,
    output_dir = "outputs",
    report_to = "none",
)
```
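
For context, these arguments would normally be passed to a trainer. The card does not publish the training script; the sketch below assumes the common Unsloth + TRL `SFTTrainer` recipe, and the toy `train_dataset`, its `"text"` field, and the LoRA settings are hypothetical placeholders.

```python
# Sketch only: the actual training script is not part of this card.
from datasets import Dataset
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Hypothetical toy dataset; replace with the real subjectivity data.
train_dataset = Dataset.from_dict(
    {"text": ["Sentence: I think this movie is wonderful.\nLabel: 1"]}
)

model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit", max_seq_length = 2048, load_in_4bit = True
)
model = FastLanguageModel.get_peft_model(
    model, r = 16, lora_alpha = 16,  # hypothetical LoRA settings
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = train_dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = args,  # the TrainingArguments defined above
)
trainer.train()
```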


### Framework versions

- PEFT 0.11.1