---
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
license: apache-2.0
language:
- en
pipeline_tag: zero-shot-classification
tags:
- subjectivity
---
# ashrafulparan/llama-3-finetuned-for-subjectivity-english-lora
<!-- Provide a quick summary of what the model is/does. -->
This model is a LoRA fine-tune of Llama-3 (base: unsloth/llama-3-8b-bnb-4bit) for subjectivity classification of English text.
Given a sentence, it outputs 1 for subjective and 0 for objective.
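A minimal inference sketch (an illustration, not the authors' exact pipeline): it loads the LoRA adapter on top of the 4-bit base model and prompts for a 0/1 label. The prompt template and generation settings are assumptions; adapt them to the format used during fine-tuning.

```python
# Hedged inference sketch -- the prompt template below is an assumption,
# not necessarily the exact format used during fine-tuning.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "ashrafulparan/llama-3-finetuned-for-subjectivity-english-lora"

# Loads the base model (unsloth/llama-3-8b-bnb-4bit) with the LoRA adapter applied.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-bnb-4bit")

prompt = (
    "Classify the following sentence as subjective (1) or objective (0).\n"
    "Sentence: I think the new policy is a disaster.\n"
    "Label:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=2, do_sample=False)
# Decode only the newly generated tokens (the predicted label).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```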
## Model Details
### Model Hyperparameters
```python
import torch
from transformers import TrainingArguments

args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,
    warmup_steps = 5,
    num_train_epochs = 12,
    learning_rate = 5e-5,
    fp16 = not torch.cuda.is_bf16_supported(),
    bf16 = torch.cuda.is_bf16_supported(),
    logging_steps = 10,
    optim = "adamw_8bit",
    weight_decay = 0.001,
    lr_scheduler_type = "linear",
    seed = 3407,
    output_dir = "outputs",
    report_to = "none",
)
```
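For context, a sketch of how arguments like these are typically consumed in an unsloth fine-tuning script. This is an assumed setup, not the authors' exact code: the dataset, LoRA configuration, and sequence length below are placeholders.

```python
# Hedged training sketch (assumed setup, not the authors' exact script):
# shows how `args` above would typically be passed to trl's SFTTrainer
# together with an unsloth 4-bit base model.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from datasets import Dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)
model = FastLanguageModel.get_peft_model(model)  # attach LoRA adapters (default config)

# Placeholder dataset: each example is a prompt ending in the 0/1 label.
dataset = Dataset.from_dict({
    "text": ["Sentence: I think the new policy is a disaster.\nLabel: 1"]
})

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = 2048,
    args = args,  # the TrainingArguments defined above
)
trainer.train()
```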
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
### Framework versions
- PEFT 0.11.1