---
license: apache-2.0
tags:
- Indonesian
- Chat
- Instruct
language:
- id
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
datasets:
- NekoFi/alpaca-gpt4-indonesia-cleaned
pipeline_tag: text-generation
---

![image/jpeg](https://huggingface.co/xMaulana/FinMatcha-3B-Instruct/resolve/main/image.jpg)

# FinMatcha-3B-Instruct

FinMatcha is an Indonesian-focused large language model (LLM) fine-tuned from [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct). It is trained to handle a variety of natural language processing tasks such as text generation, summarization, translation, and question answering, with a particular emphasis on understanding and generating Indonesian text.

The model was fine-tuned on Indonesian instruction data, making it adept at handling the nuances of the Indonesian language, from formal to colloquial registers. It also supports English for bilingual applications.

## Model Details

- **Finetuned from model**: [Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- **Dataset**: [NekoFi/alpaca-gpt4-indonesia-cleaned](https://huggingface.co/datasets/NekoFi/alpaca-gpt4-indonesia-cleaned)
- **Model Size**: 3B
- **License**: [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Languages**: Indonesian, English

## How to use

### Installation

To use the FinMatcha model, install the required dependencies:

```bash
pip install "transformers>=4.45" torch accelerate
```

### Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xMaulana/FinMatcha-3B-Instruct"

# Load the model in half precision; `device_map="auto"` requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Llama tokenizers have no padding token by default, so reuse the EOS token.
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = tokenizer.eos_token

# Indonesian prompt: "give me a recipe for super delicious fried rice".
inputs = tokenizer("berikan aku resep nasi goreng super lezat", return_tensors="pt").to(model.device)
outputs = model.generate(
    inputs.input_ids,
    max_new_tokens=1024,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.7,
    do_sample=True,
    top_k=5,
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
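
Because the model inherits the Llama 3.2 chat template from its base model, prompts can also be built with `tokenizer.apply_chat_template`. The snippet below is a minimal sketch that reuses the `model` and `tokenizer` loaded above; the system prompt and user question are illustrative placeholders, not part of the official examples.

```python
# Minimal chat-template sketch; the system prompt is an illustrative assumption.
messages = [
    {"role": "system", "content": "You are a helpful assistant that answers in Indonesian."},
    {"role": "user", "content": "Jelaskan apa itu inflasi secara singkat."},  # "Briefly explain what inflation is."
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```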

## Limitations

- The model is primarily focused on the Indonesian language and may not perform as well on non-Indonesian tasks.
- As with all LLMs, cultural and contextual biases can be present.

## License

The model is licensed under the [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0) license.

## Contributing

We welcome contributions to enhance and improve FinMatcha. Feel free to open issues or submit pull requests.