---
license: mit
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
- croissantllm/CroissantLLM-2201-sft
language:
- fr
- en
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---

# CroissantLLMChat (190k steps + Chat)

This model is part of the CroissantLLM initiative and corresponds to the checkpoint after 190k steps (2.99T tokens) of pretraining, followed by a final Chat finetuning phase.

For best performance, it should be used with the exact template described below:

```python
CHAT = """<|im_start|>user
{USER QUERY}<|im_end|>
<|im_start|>assistant\n"""
```
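
As a small illustration (a sketch, not part of the original card), the `{USER QUERY}` placeholder can be filled with a plain string substitution before tokenization; the `build_prompt` helper below is hypothetical:

```python
# Mirrors the template above; \n inside the triple-quoted string is a real newline
CHAT_TEMPLATE = """<|im_start|>user
{USER QUERY}<|im_end|>
<|im_start|>assistant\n"""

# Hypothetical helper: substitute an actual user query into the template
def build_prompt(user_query: str) -> str:
    return CHAT_TEMPLATE.replace("{USER QUERY}", user_query)

prompt = build_prompt("Que puis-je faire à Marseille?")
```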

## Abstract

We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.
To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.
To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French Language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases, and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models, and strong translation models. We evaluate our model through the FMTI framework, and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.
This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.

## Citation

Our work can be cited as:

```bash
Coming soon
```

## Usage

This model is a Chat model, that is, it has been finetuned for conversational use and works best with the exact chat template described above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "croissantllm/CroissantLLMChat-v0.1"

# Load the tokenizer and the model in half precision, placed automatically on the available device(s)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# User query formatted with the chat template described above
CHAT = """<|im_start|>user
Que puis-je faire à Marseille?<|im_end|>
<|im_start|>assistant\n"""

# Tokenize the prompt and sample a completion
inputs = tokenizer(CHAT, return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_new_tokens=150, do_sample=True, top_p=0.95, top_k=60, temperature=0.5)
print(tokenizer.decode(tokens[0]))
```
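
The decoded output above contains the prompt and the chat control tokens. A minimal sketch (not part of the original card, assuming the default `transformers` behaviour of returning the prompt tokens at the start of the generated sequence) for keeping only the assistant reply:

```python
# Drop the prompt tokens, then decode without the special chat tokens
new_tokens = tokens[0][inputs["input_ids"].shape[1]:]
reply = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(reply)
```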