---
license: apache-2.0
datasets:
- ihalage/sinhala-instruction-finetune-large
language:
- si
- en
---
[![llama3-sinhala](https://img.shields.io/badge/llama3--sinhala-github-blue)](https://github.com/ihalage/llama3-sinhala)

LLaMA3 (8B) model instruction-finetuned to understand and respond in Sinhala. `meta-llama/Meta-Llama-3-8B-Instruct` is finetuned on a relatively large Sinhala dataset compiled by translating English datasets such as ELI5 and Alpaca. The dataset is hosted on the Hugging Face Datasets hub [(`sinhala-instruction-finetune-large`)](https://huggingface.co/datasets/ihalage/sinhala-instruction-finetune-large).
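
The dataset can be loaded directly with the `datasets` library, for example:

```python
from datasets import load_dataset

# Load the Sinhala instruction dataset from the Hugging Face Hub.
ds = load_dataset("ihalage/sinhala-instruction-finetune-large")
print(ds)  # inspect the available splits and columns
```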

The original model is 4-bit quantized and finetuned with a causal language modelling (CLM) objective by adding LoRA adapters with a rank of 16 and a scaling factor of 32.
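
For reference, a setup along these lines can be expressed with `transformers` and `peft`. This is a minimal sketch of the described configuration (4-bit quantization, LoRA rank 16, alpha 32); the target modules and other hyperparameters below are illustrative assumptions, not the exact training recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Meta-Llama-3-8B-Instruct"

# Load the base model in 4-bit precision so it fits on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters with rank 16 and scaling factor (alpha) 32, as stated above.
# The target modules are assumed; the attention projections are a common
# choice for LLaMA-style models.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",  # causal language modelling (CLM) objective
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The adapters are then trained on the instruction dataset with a standard CLM loss (e.g. via `trl`'s `SFTTrainer`).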

The finetuned `llama3-sinhala` model generates better responses in Sinhala than the original instruction-finetuned model released by Meta.
See the GitHub repo [llama3-sinhala](https://github.com/ihalage/llama3-sinhala) for more details.
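
A quick way to try the model (assuming it is published on the Hub as `ihalage/llama3-sinhala` with merged weights; if only the LoRA adapters are released, load them on top of the base model with `peft`'s `PeftModel.from_pretrained` instead):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ihalage/llama3-sinhala"  # assumed Hub id; see the GitHub repo for specifics

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3 instruct models use a chat template; apply it to a Sinhala prompt.
# The prompt asks: "What is the capital of Sri Lanka?"
messages = [{"role": "user", "content": "ශ්‍රී ලංකාවේ අගනුවර කුමක්ද?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```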