---
license: apache-2.0
datasets:
- ihalage/sinhala-instruction-finetune-large
language:
- si
- en
---
[![llama3-sinhala](https://img.shields.io/badge/llama3--sinhala-github-blue)](https://github.com/ihalage/llama3-sinhala)

LLaMA3 (8B) model instruction-finetuned to understand and respond in the Sinhala language. `meta-llama/Meta-Llama-3-8B-Instruct` is finetuned on a relatively large Sinhala dataset compiled by translating English datasets such as ELI5 and Alpaca. The dataset is hosted on the Hugging Face Datasets Hub ([`sinhala-instruction-finetune-large`](https://huggingface.co/datasets/ihalage/sinhala-instruction-finetune-large)).
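
For a quick look at the data, it can be pulled straight from the Hub with the `datasets` library. A minimal sketch, where the `train` split name is an assumption rather than something this card guarantees:

```python
from datasets import load_dataset

# Dataset referenced by this card; the "train" split name is an assumption.
dataset = load_dataset("ihalage/sinhala-instruction-finetune-large", split="train")

print(dataset)     # inspect features and row count
print(dataset[0])  # peek at one instruction/response example
```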

The original model is 4-bit quantized and finetuned with a causal language modelling (CLM) objective by adding LoRA adapters with a rank of 16 and a scaling factor of 32.
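
A minimal sketch of that setup using `transformers`, `bitsandbytes`, and `peft` is shown below. Only the 4-bit quantization, rank 16, and scaling factor 32 come from this card; the NF4 quant type, compute dtype, and target modules are illustrative assumptions, not the exact training configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit (NF4 here is an assumption; the card only says "4-bit").
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters with rank 16 and scaling factor 32, as stated above.
# target_modules is illustrative; the set used for the actual training may differ.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```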

The finetuned `llama3-sinhala` model generates better responses in Sinhala compared to the original instruction-finetuned model released by Meta.
See the GitHub repo [llama3-sinhala](https://github.com/ihalage/llama3-sinhala) for more details.
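
For inference, something along these lines should work. The Hub id `ihalage/llama3-sinhala` is assumed from the repo name, and we assume the finetune keeps the Llama 3 Instruct chat template; see the GitHub repo for the authoritative example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub id assumed from the repo name; adjust if the model lives elsewhere.
model_id = "ihalage/llama3-sinhala"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3 Instruct models ship a chat template; we assume this finetune keeps it.
messages = [
    {"role": "user", "content": "ශ්‍රී ලංකාව ගැන කෙටි හැඳින්වීමක් දෙන්න."}  # "Give a short introduction to Sri Lanka."
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```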