Safetensors · llama · falcon3 · 4-bit precision · gptq
slimfrikha-tii committed
Commit e8191f3
0 Parent(s)

falcon3 release

.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,156 @@
+ ---
+ language:
+ - en
+ - fr
+ - es
+ - pt
+ tags:
+ - falcon3
+ base_model: tiiuae/Falcon3-3B-Instruct
+ license: other
+ license_name: falcon-llm-license
+ license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
+ ---
+
+ <div align="center">
+ <img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
+ </div>
+
+ # Falcon3-3B-Instruct-GPTQ-Int4
+
+ The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
+
+ **Falcon3-3B-Instruct** achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks.
+ Falcon3-3B-Instruct supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 32K.
+
+ This repository contains the GPTQ-quantized 4-bit instruction-tuned 3B Falcon3 model.
+
+ ## Model Details
+ - Architecture
+ - Transformer-based causal decoder-only architecture
+ - 22 decoder blocks
+ - Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
+ - Wider head dimension: 256
+ - High RoPE value to support long context understanding: 1000042
+ - Uses SwiGLU and RMSNorm
+ - 32K context length
+ - 131K vocab size
+ - Pruned and healed from Falcon3-7B-Base on only 100 gigatokens of web, code, STEM, high-quality, and multilingual data using 1024 H100 GPU chips
+ - Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
+ - Supports EN, FR, ES, PT
+ - Developed by [Technology Innovation Institute](https://www.tii.ae)
+ - License: TII Falcon-LLM License 2.0
+ - Model Release Date: December 2024
+ - Quantization: GPTQ 4-bit
+
+
+ ## Getting started
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+
+ model_name = "tiiuae/Falcon3-3B-Instruct-GPTQ-Int4"
+
+ model = AutoModelForCausalLM.from_pretrained(
+ model_name,
+ torch_dtype="auto",
+ device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ prompt = "How many hours in one day?"
+ messages = [
+ {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
+ {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+ messages,
+ tokenize=False,
+ add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+ **model_inputs,
+ max_new_tokens=1024
+ )
+ # Keep only the newly generated tokens, stripping the echoed prompt
+ generated_ids = [
+ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
+
+ </details>
+
+ <br>
+
+ ## Benchmarks
+ We report below our internal pipeline benchmarks, comparing this Int4 checkpoint with the unquantized model and the other quantized variants:
+
+ <table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
+ <colgroup>
+ <col style="width: 10%;">
+ <col style="width: 10%;">
+ <col style="width: 10%;">
+ <col style="width: 10%;">
+ <col style="background-color: rgba(80, 15, 213, 0.5); width: 7%;">
+ </colgroup>
+ <thead>
+ <tr>
+ <th>Benchmark</th>
+ <th>Falcon3-3B-Instruct</th>
+ <th>Falcon3-3B-Instruct-GPTQ-Int8</th>
+ <th>Falcon3-3B-Instruct-AWQ</th>
+ <th>Falcon3-3B-Instruct-GPTQ-Int4</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>MMLU</td>
+ <td>55.70</td>
+ <td>55.79</td>
+ <td>53.30</td>
+ <td>53.25</td>
+ </tr>
+ <tr>
+ <td>MMLU-PRO</td>
+ <td>30.00</td>
+ <td>30.27</td>
+ <td>28.37</td>
+ <td>25.88</td>
+ </tr>
+ <tr>
+ <td>IFEval</td>
+ <td>69.07</td>
+ <td>68.34</td>
+ <td>67.85</td>
+ <td>62.83</td>
+ </tr>
+ </tbody>
+ </table>
+
+ ## Useful links
+ - View our [release blogpost](https://huggingface.co/blog/falcon3).
+ - Feel free to join [our Discord server](https://discord.gg/fwXpMyGc) if you have any questions or want to interact with our researchers and developers.
+
+ ## Technical Report
+ Coming soon.
+
+ ## Citation
+ If the Falcon3 family of models was helpful to your work, feel free to cite us:
+
+ ```
+ @misc{Falcon3,
+ title = {The Falcon 3 Family of Open Models},
+ url = {https://huggingface.co/blog/falcon3},
+ author = {Falcon-LLM Team},
+ month = {December},
+ year = {2024}
+ }
+ ```
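
The Getting started example above runs through plain `transformers`. For higher-throughput serving, the same checkpoint should also load in vLLM, which ships GPTQ kernels; this is a minimal sketch based on vLLM's public API, not a path documented in the model card:

```python
# Minimal sketch (assumed setup, not from the model card): serving the GPTQ checkpoint with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="tiiuae/Falcon3-3B-Instruct-GPTQ-Int4", quantization="gptq")
params = SamplingParams(temperature=0.0, max_tokens=128)

# Raw-prompt generation; for chat-formatted prompts, apply the repo's chat
# template first (as in the transformers example above).
outputs = llm.generate(["How many hours in one day?"], params)
print(outputs[0].outputs[0].text)
```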
config.json ADDED
@@ -0,0 +1,42 @@
+ {
+ "_attn_implementation_autoset": true,
+ "architectures": [
+ "LlamaForCausalLM"
+ ],
+ "attention_bias": false,
+ "attention_dropout": 0.0,
+ "eos_token_id": 11,
+ "head_dim": 256,
+ "hidden_act": "silu",
+ "hidden_size": 3072,
+ "initializer_range": 0.02,
+ "intermediate_size": 9216,
+ "max_position_embeddings": 32768,
+ "mlp_bias": false,
+ "model_type": "llama",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 22,
+ "num_key_value_heads": 4,
+ "pretraining_tp": 1,
+ "quantization_config": {
+ "bits": 4,
+ "checkpoint_format": "gptq",
+ "damp_percent": 0.01,
+ "desc_act": false,
+ "group_size": 128,
+ "model_file_base_name": "model",
+ "model_name_or_path": null,
+ "quant_method": "gptq",
+ "static_groups": false,
+ "sym": true,
+ "true_sequential": true
+ },
+ "rms_norm_eps": 1e-06,
+ "rope_scaling": null,
+ "rope_theta": 1000042,
+ "tie_word_embeddings": false,
+ "torch_dtype": "float16",
+ "transformers_version": "4.47.0",
+ "use_cache": true,
+ "vocab_size": 131072
+ }
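
Several values in this config line up directly with the model card's claims: 22 decoder blocks, 12 query heads over 4 key-value heads (GQA), head dimension 256, and the high RoPE base of 1000042. A quick sanity check using only the standard `transformers` config API (not part of this commit):

```python
from transformers import AutoConfig

# Pull the config from the Hub and check it against the model card.
cfg = AutoConfig.from_pretrained("tiiuae/Falcon3-3B-Instruct-GPTQ-Int4")

assert cfg.num_hidden_layers == 22                                     # 22 decoder blocks
assert cfg.num_attention_heads == 12 and cfg.num_key_value_heads == 4  # GQA: 3 query heads per KV head
assert cfg.head_dim == 256                                             # wider head dimension
assert cfg.rope_theta == 1000042                                       # high RoPE base for 32K context
assert cfg.vocab_size == 131072                                        # 131K vocab
print("config matches the model card")
```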
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3896ad4aca3a16241bbfe5f6f923e06be3436dd728ff501dccd46b9d0566274b
+ size 2871810264
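
`model.safetensors` is stored as a Git LFS pointer: the `oid` line pins the SHA-256 of the actual ~2.9 GB weight file. After downloading, the digest can be recomputed locally to verify integrity; a minimal sketch (the local path is hypothetical):

```python
import hashlib

path = "model.safetensors"  # hypothetical local path to the downloaded weights

h = hashlib.sha256()
with open(path, "rb") as f:
    # Stream in 1 MiB chunks so the ~2.9 GB file never sits in memory at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

# Should equal the oid recorded in the LFS pointer above.
print(h.hexdigest() == "3896ad4aca3a16241bbfe5f6f923e06be3436dd728ff501dccd46b9d0566274b")
```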
quantize_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "bits": 4,
+ "group_size": 128,
+ "damp_percent": 0.01,
+ "desc_act": false,
+ "static_groups": false,
+ "sym": true,
+ "true_sequential": true,
+ "model_name_or_path": null,
+ "model_file_base_name": "model",
+ "quant_method": "gptq",
+ "checkpoint_format": "gptq"
+ }
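
`quantize_config.json` mirrors the `quantization_config` block in `config.json`: 4-bit weights, group size 128, symmetric quantization, no activation-order reordering (`desc_act: false`). The card does not say which toolchain produced the checkpoint; as one plausible reconstruction, these settings map one-to-one onto the AutoGPTQ library, roughly as below (base model name from the card's metadata, calibration data hypothetical):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base = "tiiuae/Falcon3-3B-Instruct"  # full-precision base listed in the model card

# The same knobs as quantize_config.json above.
qcfg = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=False,
    static_groups=False,
    sym=True,
    true_sequential=True,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoGPTQForCausalLM.from_pretrained(base, qcfg)

# Calibration examples: TII's actual calibration set is not published,
# so any representative text corpus stands in here.
examples = [tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")]
model.quantize(examples)
model.save_quantized("Falcon3-3B-Instruct-GPTQ-Int4")
```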
special_tokens_map.json ADDED
@@ -0,0 +1,41 @@
+ {
+ "additional_special_tokens": [
+ ">>TITLE<<",
+ ">>ABSTRACT<<",
+ ">>INTRODUCTION<<",
+ ">>SUMMARY<<",
+ ">>COMMENT<<",
+ ">>ANSWER<<",
+ ">>QUESTION<<",
+ ">>DOMAIN<<",
+ ">>EMAIL_ADDRESS<<",
+ ">>IP_ADDRESS<<",
+ "<|startoftext|>",
+ ">>IP_ADDRESS_0<<",
+ ">>IP_ADDRESS_1<<",
+ ">>IP_ADDRESS_2<<",
+ ">>IP_ADDRESS_3<<",
+ ">>IP_ADDRESS_4<<",
+ ">>IP_ADDRESS_5<<",
+ ">>IP_ADDRESS_6<<",
+ ">>IP_ADDRESS_7<<",
+ ">>IP_ADDRESS_8<<",
+ ">>IP_ADDRESS_9<<",
+ ">>PASSWORD<<",
+ ">>KEY<<"
+ ],
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
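
Most of the `additional_special_tokens` (`>>TITLE<<`, the `>>IP_ADDRESS_*<<` series, and so on) appear to be carried over from the earlier Falcon tokenizer; the two that matter at inference time are `<|endoftext|>` as EOS and `<|pad|>` for padding. A quick check against `eos_token_id: 11` from `config.json`, using only the standard tokenizer API:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("tiiuae/Falcon3-3B-Instruct-GPTQ-Int4")

# EOS must line up with eos_token_id=11 in config.json, or generation never stops.
print(tok.eos_token, tok.eos_token_id)  # expected: <|endoftext|> 11
print(tok.pad_token)                    # expected: <|pad|>
```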
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff