---
base_model: meta-llama/Llama-3.2-1B
datasets:
- student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Dataset
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- torch
- trl
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** student-abdullah
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.2-1B
- **Created on:** 9th October, 2024
---
# Acknowledgement
<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>
---
# Model Description
This model is fine-tuned from the meta-llama/Llama-3.2-1B base model to improve its ability to generate relevant and accurate responses about generic medications available under the PMBJP scheme. Fine-tuning used the following hyperparameters (a rough training sketch is given after this list):
- Fine-tuning Template: Llama Q&A
- Max Tokens: 1024
- LoRA Alpha: 2
- LoRA Rank (r): 1024
- Learning rate: 5e-5
- Gradient Accumulation Steps: 1
- Batch Size: 8
- Quantization: None
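These hyperparameters map naturally onto an Unsloth + TRL supervised fine-tuning run. The snippet below is only a minimal sketch of such a setup, not the exact training script: the dataset text column name (`text`), the LoRA target modules, and the 10-epoch count (taken from the loss figure below) are assumptions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Base model, unquantized, 1024-token context (as listed above)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-1B",
    max_seq_length=1024,
    load_in_4bit=False,          # Quantization: None
)

# LoRA adapter with the listed rank/alpha; target modules are an assumption
model = FastLanguageModel.get_peft_model(
    model,
    r=1024,
    lora_alpha=2,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset(
    "student-abdullah/BigPharma_Generic_Q-A_Format_Augemented_Dataset",
    split="train",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumption: pre-formatted Q&A text column
    max_seq_length=1024,
    args=TrainingArguments(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=1,
        learning_rate=5e-5,
        num_train_epochs=10,     # assumption based on the final-epoch loss reported below
        output_dir="outputs",
    ),
)
trainer.train()
```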
---
# Model Quantitative Performance
- Training Loss: 0.1376 (at the final 10th epoch, step 5150)
---
# Limitations
- Token Limitations: With a maximum sequence length of 1024 tokens, the model may not handle very long queries or contexts effectively.
- Training Data Limitations: The model's performance depends on the quality and coverage of the fine-tuning dataset, which may limit its generalizability to contexts or medications not covered in that data.
- Potential Biases: As with any model fine-tuned on specific data, the model may reflect biases present in the training dataset.
---
# Model Performance Evaluation
- Evaluated on 1,000 questions drawn from the dataset (to probe the fine-tuned knowledge base)
- At temperature 0.3 (a minimal inference sketch follows the charts below)
- Correct Responses: %
- Incorrect Responses: %
<p align="center">
<img src="" width="20%" style="display:inline-block;"/>
<img src="" width="35%" style="display:inline-block;"/>
<img src="" width="35%" style="display:inline-block;"/>
</p>
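For reference, the following is a hedged sketch of the kind of generation call used for such an evaluation; the repository id placeholder and the example question are illustrative assumptions, not the actual evaluation harness.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this model's Hugging Face repository id.
model_id = "student-abdullah/<this-model-repo>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative question; the evaluation above used 1,000 dataset-derived questions.
prompt = "Question: What is the generic alternative available under PMBJP for <brand-name drug>?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,  # evaluation temperature stated above
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```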