Update README.md
README.md

---
license: apache-2.0
datasets:
- teknium/OpenHermes-2.5
- CohereForAI/aya_dataset
- jondurbin/airoboros-3.2
- m-a-p/COIG-CQIA
- hfl/ruozhiba_gpt4
- hkust-nlp/dart-math-hard
- ise-uiuc/Magicoder-Evol-Instruct-110K
---

# Model Card for FuxiTranyu-8B-SFT

## Model Summary

FuxiTranyu-8B is a fully open-source large language model trained from scratch, with a specific focus on multilinguality. It was trained on 600B tokens with a substantially more balanced distribution across languages.

It covers 43 natural languages: Arabic, Bengali, Bulgarian, Burmese, Catalan, Chinese, Czech, Dutch, English, Filipino, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Malay, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Tajik, Thai, Turkish, Turkmen, Ukrainian, Urdu, Uzbek, and Vietnamese.
It also covers 16 programming languages: Java, JavaScript, Python, PHP, C, C++, C#, TypeScript, Go, SQL, Rust, Ruby, Scala, Lua, Assembly, and Visual Basic.

FuxiTranyu-8B-SFT is an instruction-tuned version of the [FuxiTranyu-8B](https://huggingface.co/TJUNLP/FuxiTranyu-8B) base model.

More details can be found in our technical report.
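
As a rough illustration of the "balanced distribution" idea, the sketch below applies temperature-style re-weighting to per-language token counts, a common way to smooth a multilingual corpus. The function, the `alpha` value, and the counts are illustrative assumptions, not the actual recipe from the report.

```python
# Illustrative sketch only: smooth a multilingual data distribution by
# exponentiating each language's share by alpha < 1 and renormalizing.
# All names and numbers here are made up; see the technical report for
# the actual data-balancing scheme used for FuxiTranyu.

def smoothed_sampling_probs(token_counts: dict, alpha: float = 0.5) -> dict:
    total = sum(token_counts.values())
    weights = {lang: (count / total) ** alpha for lang, count in token_counts.items()}
    norm = sum(weights.values())
    return {lang: w / norm for lang, w in weights.items()}

# A heavily English-skewed corpus becomes noticeably flatter:
print(smoothed_sampling_probs({"en": 300e9, "zh": 60e9, "sw": 1e9}))
# ~{'en': 0.66, 'zh': 0.30, 'sw': 0.04}
```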

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the instruction-tuned checkpoint this card describes.
model_path = "TJUNLP/FuxiTranyu-8B-SFT"

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True)

messages = [{"role": "user", "content": "This is an input text:"}]
# Format the messages with the ChatML chat template; the rendered prompt is:
# <|im_start|>user\nThis is an input text:<|im_end|>\n<|im_start|>assistant\n
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=20)
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(response)
```
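
Multi-turn conversations follow the same pattern: append each turn to `messages` and re-apply the chat template. The sketch below reuses the `tokenizer` and `model` loaded above; the prompts are arbitrary and the assistant turn is a placeholder standing in for a previous model reply, not real output.

```python
# Multi-turn sketch: the conversation history, including a placeholder
# assistant reply, is re-rendered with the ChatML template each turn.
messages = [
    {"role": "user", "content": "Translate into French: Good morning!"},
    {"role": "assistant", "content": "Bonjour !"},
    {"role": "user", "content": "Now into Spanish."},
]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=50)
# Decode only the newly generated tokens so the prompt is not echoed back.
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```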

## Citation info

```bibtex
@misc{FuxiTranyu8B,
      title={FuxiTranyu: A Multilingual Large Language Model Trained with Balanced Data},
      author={Haoran Sun and Renren Jin and Shaoyang Xu and Leiyu Pan and Supryadi and Menglong Cui and Jiangcun Du and Yikun Lei and Lei Yang and Ling Shi and Juesi Xiao and Shaolin Zhu and Deyi Xiong},
      year={2024},
      eprint={2408},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```