---
language:
- en
license: llama3
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
datasets:
- 922-Narra/tagaloguanaco_cleaned_03152024
---
# Llama-3-8b-tagalog-v1:
* Test model fine-tuned on [this dataset](https://huggingface.co/datasets/922-Narra/tagaloguanaco_cleaned_03152024)
* Base: LLaMA-3 8b
* [GGUFs](https://huggingface.co/922-Narra/Llama-3-8b-tagalog-v1)

### USAGE
This is intended primarily as a chat model.

Use "Human" and "Assistant" as the speaker labels and prompt in Tagalog:

"\nHuman: INPUT\nAssistant:"
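A minimal sketch of building that prompt string in Python (the `format_prompt` helper name and the sample Tagalog input are illustrative, not part of the model card):

```python
def format_prompt(user_input: str) -> str:
    """Wrap a user message in the model card's Human/Assistant template."""
    return f"\nHuman: {user_input}\nAssistant:"

# Example: a Tagalog greeting formatted for the model.
prompt = format_prompt("Kamusta ka?")
print(repr(prompt))  # '\nHuman: Kamusta ka?\nAssistant:'
```

The model's completion is expected to follow the trailing `Assistant:` tag, so generation should stop at the next `\nHuman:` turn marker.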

### HYPERPARAMS
* Trained for 1 epoch
* rank: 32
* lora alpha: 32
* lr: 2e-4
* batch size: 2
* grad steps: 4
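Under stated assumptions, these hyperparameters map onto an Unsloth/TRL fine-tuning setup roughly as follows. This is a configuration sketch, not the authors' exact training script; `max_seq_length`, `output_dir`, and the dataset loading step are assumptions not specified in the card:

```python
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer
from datasets import load_dataset

# Base model listed in the card's metadata.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,  # assumed; not stated in the card
    load_in_4bit=True,
)

# LoRA adapter matching the card's rank / alpha.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
)

dataset = load_dataset("922-Narra/tagaloguanaco_cleaned_03152024", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=TrainingArguments(
        num_train_epochs=1,
        learning_rate=2e-4,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        output_dir="outputs",  # assumed
    ),
)
```

With batch size 2 and 4 gradient-accumulation steps, the effective batch size per optimizer step is 8.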

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

### WARNINGS AND DISCLAIMERS
Note that the model may switch back to English (though it still understands Tagalog inputs) or produce clunky outputs.

Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it intended for production use - use at your own risk!