atsuki-yamaguchi committed
Commit 9e789e6 · Parent(s): 580e3ff
Upload README.md with huggingface_hub
README.md CHANGED
@@ -6,7 +6,7 @@ language:
 base_model: meta-llama/Meta-Llama-3-8B
 library_name: transformers
 ---
-# Llama3 8B for Sinhala: 5000 target vocabulary size + Mean target vocabulary initialization +
+# Llama3 8B for Sinhala: 5000 target vocabulary size + Mean target vocabulary initialization + 2x2LS/MTP/512 training

 This model is built on top of Llama3 8B adapted for Sinhala using 30K target language sentences sampled from CC-100.

@@ -14,7 +14,7 @@ This model is built on top of Llama3 8B adapted for Sinhala using 30K target lan

 * **Vocabulary**: This model has an additional 5000-token target vocabulary.
 * **Target vocabulary initialization**: The target weights of the embedding and LM head were initialized using Mean initialization.
-* **Training**: This model was additionally pre-trained on 30K target language sentences sampled from CC-100. The training was conducted with the
+* **Training**: This model was additionally pre-trained on 30K target language sentences sampled from CC-100. The training was conducted with the 2x2LS/MTP/512 strategies introduced in the paper.

 ## Model Description

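For readers unfamiliar with the **Target vocabulary initialization** bullet in the diff above, the sketch below shows one common reading of Mean initialization when a vocabulary is expanded: each added token's embedding and LM-head rows are set to the mean of the rows for the subword pieces the original tokenizer assigns to that token (another common variant averages over the whole source embedding matrix). The token list and the exact averaging rule are illustrative assumptions, not necessarily the procedure used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Meta-Llama-3-8B"

src_tok = AutoTokenizer.from_pretrained(BASE)  # original tokenizer, kept intact
tgt_tok = AutoTokenizer.from_pretrained(BASE)  # copy that receives new tokens
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# Hypothetical new Sinhala tokens; the model card adds 5000 of them.
new_tokens = ["ශ්‍රී", "ලංකාව"]
tgt_tok.add_tokens(new_tokens)

old_vocab_size = model.get_input_embeddings().weight.shape[0]
model.resize_token_embeddings(len(tgt_tok))

emb = model.get_input_embeddings().weight    # input embedding matrix
head = model.get_output_embeddings().weight  # LM head matrix
with torch.no_grad():
    for token in new_tokens:
        new_id = tgt_tok.convert_tokens_to_ids(token)
        # Subword pieces the ORIGINAL tokenizer uses for this string.
        piece_ids = src_tok(token, add_special_tokens=False).input_ids
        emb[new_id] = emb[piece_ids].mean(dim=0)
        head[new_id] = head[piece_ids].mean(dim=0)
```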
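The **Training** bullet refers to continued pre-training on the 30K-sentence CC-100 sample. The sketch below shows only a generic causal-LM adaptation loop with `transformers`; it does not reproduce the paper's 2x2LS/MTP/512 strategies, and the checkpoint path, data file, sequence length, and hyperparameters are all placeholders.

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

ckpt = "path/to/vocab-expanded-llama3"  # hypothetical checkpoint from the step above
tok = AutoTokenizer.from_pretrained(ckpt)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # Llama tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16)

# Hypothetical file holding the 30K Sinhala sentences sampled from CC-100.
ds = load_dataset("text", data_files={"train": "cc100_si_30k.txt"})

def tokenize(batch):
    # 512 is an assumed maximum sequence length, not a confirmed setting.
    return tok(batch["text"], truncation=True, max_length=512)

train_ds = ds["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama3-8b-sinhala",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```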
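Since the card declares `library_name: transformers`, the adapted model should load through the standard `transformers` API. A minimal usage sketch, with a placeholder repository id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "atsuki-yamaguchi/..."  # placeholder; use this model's actual repo id
tok = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "ශ්‍රී ලංකාව"  # a short Sinhala prompt
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```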