Llama-3 Bangla GGUF (q4_k_m)
- Developed by: KillerShoaib
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
- Dataset used for fine-tuning: iamshnoo/alpaca-cleaned-bengali
GGUF (q4_k_m) format
This is the GGUF (q4_k_m) format of the Llama-3 8B Bangla model. This format can be run on a CPU using llama.cpp, or locally with Ollama.
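As a starting point, here is a minimal CPU-inference sketch using the llama-cpp-python bindings; the GGUF filename pattern and the generation settings below are assumptions, so check the repository's file list before running.

```python
# Minimal sketch: CPU inference with llama-cpp-python (pip install llama-cpp-python).
# The filename pattern is an assumption; adjust it to the actual GGUF file in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="KillerShoaib/llama-3-8b-bangla-GGUF-Q4_K_M",
    filename="*q4_k_m.gguf",  # assumed filename pattern
    n_ctx=2048,               # context window
    n_threads=8,              # CPU threads
    verbose=False,
)

out = llm("বাংলাদেশের রাজধানীর নাম কী?", max_tokens=128)
print(out["choices"][0]["text"])
```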
Llama-3 Bangla in Different Formats
- 4-bit quantized (QLoRA): KillerShoaib/llama-3-8b-bangla-4bit
- LoRA adapters only: KillerShoaib/llama-3-8b-bangla-lora
Model Details
The Llama-3 8-billion-parameter model was finetuned with the Unsloth package on a cleaned Bangla Alpaca dataset and then quantized to 4-bit. The model was finetuned for 2 epochs on a single T4 GPU.
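For reference, a minimal sketch of this QLoRA setup with Unsloth is shown below; it follows Unsloth's documented workflow, and the LoRA hyperparameters are illustrative assumptions rather than the exact values used for this model.

```python
# Sketch of the fine-tuning setup described above, using Unsloth's documented API.
# Hyperparameters are illustrative assumptions, not the exact values used.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # 4-bit base listed in this card
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained (QLoRA).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)
# The adapters would then be trained for 2 epochs on iamshnoo/alpaca-cleaned-bengali
# and the merged model quantized to GGUF q4_k_m.
```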
Pros & Cons of the Model
Pros
- The model can comprehend the Bangla language, including its semantic nuances
- Given a context, the model can answer questions based on it
Cons
- The model is unable to do creative or complex work, e.g. writing a poem or solving a math problem in Bangla
- Since the dataset was small, the model lacks a lot of general knowledge in Bangla
Run the Model
This will be updated very soon
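Until then, the llama-cpp-python sketch from the GGUF section above can be reused. The example below additionally wraps the question in a standard Alpaca prompt template, which is an assumption based on the fine-tuning dataset rather than a format confirmed by this card.

```python
# Hedged sketch: run the GGUF on CPU with an Alpaca-style prompt.
# Filename pattern and prompt template are assumptions; verify against the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="KillerShoaib/llama-3-8b-bangla-GGUF-Q4_K_M",
    filename="*q4_k_m.gguf",  # assumed filename pattern
    n_ctx=2048,
)

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="বাংলাদেশের জাতীয় ফুল কী?")
out = llm(prompt, max_tokens=256, stop=["###"])
print(out["choices"][0]["text"])
```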
Model tree for KillerShoaib/llama-3-8b-bangla-GGUF-Q4_K_M
- Base model: meta-llama/Meta-Llama-3-8B
- Quantized: unsloth/llama-3-8b-bnb-4bit