slimfrikha-tii committed • Commit f192894

falcon3 release
Browse files:
- .gitattributes +44 -0
- Falcon3-1B-Instruct-f16.gguf +3 -0
- Falcon3-1B-Instruct-q2_k.gguf +3 -0
- Falcon3-1B-Instruct-q3_k_m.gguf +3 -0
- Falcon3-1B-Instruct-q4_0.gguf +3 -0
- Falcon3-1B-Instruct-q4_k_m.gguf +3 -0
- Falcon3-1B-Instruct-q5_0.gguf +3 -0
- Falcon3-1B-Instruct-q5_k_m.gguf +3 -0
- Falcon3-1B-Instruct-q6_k.gguf +3 -0
- Falcon3-1B-Instruct-q8_0.gguf +3 -0
- README.md +126 -0
.gitattributes
ADDED
@@ -0,0 +1,44 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tar filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-f16.gguf filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+Falcon3-1B-Instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
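The attribute lines above route matching files through Git LFS instead of regular Git storage. A rough sketch of checking a filename against a few of those patterns, using Python's `fnmatch` (note this is only an approximation: real gitattributes matching is gitignore-style, which treats `/` and `**` differently from `fnmatch`, but the two agree for simple basename globs like these):

```python
import fnmatch

# A few of the patterns from the .gitattributes above (globs and one exact name).
LFS_PATTERNS = [
    "*.safetensors",
    "*tfevents*",
    "Falcon3-1B-Instruct-f16.gguf",
]

def routed_to_lfs(filename: str) -> bool:
    # Approximation of gitattributes matching for simple basename globs.
    return any(fnmatch.fnmatch(filename, pat) for pat in LFS_PATTERNS)

print(routed_to_lfs("model.safetensors"))         # True
print(routed_to_lfs("events.out.tfevents.1234"))  # True
print(routed_to_lfs("README.md"))                 # False
```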
Falcon3-1B-Instruct-f16.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32d5a0848ace5a1faebfe517744e8b505de1952210d951f93e39e93118e1b5c7
+size 3343708192
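Each `.gguf` entry in this commit is a Git LFS pointer file with the three-line `key value` layout shown above (the actual model weights live in LFS object storage). A minimal sketch of parsing that layout; `parse_lfs_pointer` is a hypothetical helper, not part of git-lfs itself:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key/value dict.

    Pointer files are 'key value' lines, e.g.:
        version https://git-lfs.github.com/spec/v1
        oid sha256:...
        size 3343708192
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# The f16 pointer from this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:32d5a0848ace5a1faebfe517744e8b505de1952210d951f93e39e93118e1b5c7
size 3343708192
"""
fields = parse_lfs_pointer(pointer)
print(fields["oid"])        # sha256-prefixed object id
print(int(fields["size"]))  # payload size in bytes: 3343708192
```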
Falcon3-1B-Instruct-q2_k.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f22c6dfcd910cd4beed538254acfd597c9285d9c3adb9944d34f6a012f73dc2e
+size 727085088
Falcon3-1B-Instruct-q3_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:129f95cb6c7c146c05fb3aa6fca855fcc51ebc3e571ad7820ff69ad9db0e0e8c
+size 884961312
Falcon3-1B-Instruct-q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fc26bf2f14fb7363007623362c9a935f90ef45ca9f4a38b49d19faebdba4220
+size 1013248032
Falcon3-1B-Instruct-q4_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c75ba88610fb333f16ab19e8ef591a63f3f03d1e4beffd0d2a978a851401d32f
+size 1057042464
Falcon3-1B-Instruct-q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3234db54a672421f4b7aa4b312e42a14f4e056e247d306104373fb6d774351c6
+size 1188360224
Falcon3-1B-Instruct-q5_k_m.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb6f4e0d16b12d8482679b9581c53e5773f19b6a11d036f2bc117d44f93c6fad
+size 1210920992
Falcon3-1B-Instruct-q6_k.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c10e86a1e256387b51268e8a5e963513e29af3884eee3094f5422c445c9883a
+size 1374416928
Falcon3-1B-Instruct-q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80e80a4c818408e554748d7792ce9768e33e4cf2df9d9b2fa61f155118de467d
+size 1778708512
README.md
ADDED
@@ -0,0 +1,126 @@
+---
+language:
+- en
+- fr
+- es
+- pt
+base_model:
+- tiiuae/Falcon3-1B-Instruct
+pipeline_tag: text-generation
+library_name: transformers
+tags:
+- falcon3
+---
+
+<div align="center">
+<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/general/falco3-logo.png" alt="drawing" width="500"/>
+</div>
+
+# Falcon3-1B-Instruct-GGUF
+
+The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
+
+**Falcon3-1B-Instruct** achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks.
+It supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 8K tokens.
+
+This repository contains GGUF quantizations of the instruction-tuned Falcon3 1B model.
+
+## Model Details
+- Architecture
+  - Transformer-based causal decoder-only architecture
+  - 18 decoder blocks
+  - Grouped-Query Attention (GQA) for faster inference: 8 query heads and 4 key-value heads
+  - Wider head dimension: 256
+  - High RoPE value to support long-context understanding: 1000042
+  - Uses SwiGLU and RMSNorm
+  - 8K context length
+  - 131K vocab size
+- Pruned and healed from larger Falcon models (3B and 7B respectively) on only 80 gigatokens of web, code, STEM, high-quality, and multilingual data, using 256 H100 GPUs
+- Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
+- Supports EN, FR, ES, PT
+- Developed by [Technology Innovation Institute](https://www.tii.ae)
+- License: TII Falcon-LLM License 2.0
+- Model Release Date: December 2024
+- Quantizations: q2_K, q3_K_M, q4_0, q4_K_M, q5_0, q5_K_M, q6_K, q8_0
+
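The quantization variants above trade file size for output quality. The byte counts below come straight from the LFS pointers in this commit; a small sketch converts them to GiB as a rough on-disk (and approximate load-time) footprint, ignoring KV cache and runtime overhead:

```python
# Byte sizes of the GGUF files added in this commit (from their LFS pointers).
GGUF_SIZES = {
    "f16":    3343708192,
    "q2_k":    727085088,
    "q3_k_m":  884961312,
    "q4_0":   1013248032,
    "q4_k_m": 1057042464,
    "q5_0":   1188360224,
    "q5_k_m": 1210920992,
    "q6_k":   1374416928,
    "q8_0":   1778708512,
}

def gib(n_bytes: int) -> float:
    # Convert bytes to binary gigabytes (GiB).
    return n_bytes / 2**30

for name, size in GGUF_SIZES.items():
    print(f"{name:>7}: {gib(size):5.2f} GiB")
```

As expected, sizes increase monotonically from q2_k up to the unquantized f16 file.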
+## Getting started
+
+### 1. Download GGUF models from Hugging Face
+
+First, download the model from Hugging Face. You can use the `huggingface_hub` library or download it manually:
+
+```bash
+pip install huggingface_hub
+huggingface-cli download {model_name}
+```
+
+This will download the model to your current directory. Make sure to replace {model_name} with the actual username and model name from your Hugging Face repository.
+
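For manual downloads, Hugging Face serves raw file contents at a predictable `resolve` URL. A minimal sketch that builds such a URL; the repo id `tiiuae/Falcon3-1B-Instruct-GGUF` is an assumption based on this release, so substitute your own repo and filename:

```python
def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    # Hugging Face exposes raw files at /{repo_id}/resolve/{revision}/{filename}.
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hf_resolve_url("tiiuae/Falcon3-1B-Instruct-GGUF",
                     "Falcon3-1B-Instruct-q4_k_m.gguf")
print(url)
```

The resulting URL can be fetched with `wget` or `curl -L`.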
+### 2. Install llama.cpp
+
+You have several options for installing llama.cpp:
+
+**1. Build from source:**
+
+This gives you the most flexibility and control. Follow the instructions in the llama.cpp repository to build from source:
+
+```bash
+git clone https://github.com/ggerganov/llama.cpp
+cd llama.cpp
+cmake -B build
+cmake --build build --config Release
+```
+
+For more information on building llama.cpp from source, please refer to the llama.cpp documentation: **[llama.cpp build from source](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)**.
+
+**2. Download pre-built binaries:**
+
+If you prefer a quicker setup, you can download pre-built binaries for your operating system. Check the llama.cpp repository for available binaries.
+
+**3. Use Docker:**
+
+For a more contained environment, you can use the official llama.cpp Docker image. Refer to the llama.cpp documentation for instructions on how to use the Docker image.
+
+For detailed instructions and more information, please check the llama.cpp documentation on Docker: **[llama.cpp docker](https://github.com/ggerganov/llama.cpp/blob/master/docs/docker.md)**.
+
+### 3. Start playing with your model
+
+Run simple text completion:
+```bash
+llama-cli -m {path-to-gguf-model} -p "I believe the meaning of life is" -n 128
+```
+
+Run in conversation mode:
+```bash
+llama-cli -m {path-to-gguf-model} -p "You are a helpful assistant" -cnv -co
+```
+## Useful links
+- View our [release blogpost](https://huggingface.co/blog/falcon3).
+- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or want to interact with our researchers and developers.
+
+## Technical Report
+Coming soon.
+
+## Citation
+If the Falcon3 family of models was helpful to your work, feel free to cite us:
+
+```
+@misc{Falcon3,
+  title = {The Falcon 3 Family of Open Models},
+  url = {https://huggingface.co/blog/falcon3},
+  author = {Falcon-LLM Team},
+  month = {December},
+  year = {2024}
+}
+```