---
license: llama2
language:
- si
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
---

# Llama2 7B for Sinhala: 100 target vocabulary size + Align target vocabulary initialization + 2 Stage training

This model is built on top of Llama 2 7B, adapted for Sinhala using 30K target-language sentences sampled from CC-100.

## Model Details

* **Vocabulary**: This model adds 100 target-language (Sinhala) tokens to the base vocabulary.
* **Target vocabulary initialization**: The embedding and LM head weights for the new target tokens were initialized with Align initialization (see the sketch below).
* **Training**: This model was further pre-trained on 30K target-language sentences sampled from CC-100, using the two-stage training strategy introduced in the paper.
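
For illustration only, the snippet below sketches how such a vocabulary extension can be set up with the `transformers` API. The token list is a placeholder, and the default random initialization shown is a stand-in: this model actually uses the 100 Sinhala tokens and the Align initialization produced by the paper's pipeline (see Model Sources).

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the original base model and tokenizer.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Placeholder tokens; the real model adds 100 Sinhala tokens selected by the
# paper's vocabulary-extension pipeline.
new_tokens = ["<si_token_0>", "<si_token_1>"]
tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix and LM head to cover the added tokens. By default
# transformers fills the new rows randomly; this model instead initializes
# them with Align initialization.
model.resize_token_embeddings(len(tokenizer))
```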

## Model Description

- **Language:** Sinhala
- **License:** Llama 2 Community License Agreement
- **Fine-tuned from model:** meta-llama/Llama-2-7b-hf

## Model Sources

- **Repository:** https://github.com/gucci-j/lowres-cve
- **Paper:** https://arxiv.org/abs/2406.11477

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the Sinhala-adapted model and its extended tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-si-30K-align-2stage"
)
tokenizer = AutoTokenizer.from_pretrained(
    "atsuki-yamaguchi/Llama-2-7b-hf-si-30K-align-2stage"
)
```
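
Once loaded, the model can be used for Sinhala text generation in the standard `transformers` way. A minimal sketch, assuming the `model` and `tokenizer` from above; the prompt ("Sri Lanka" in Sinhala) is only an illustrative placeholder:

```python
# Greedy generation from a short Sinhala prompt (placeholder text).
inputs = tokenizer("ශ්‍රී ලංකාව", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```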