---
base_model: tiiuae/Falcon3-7B-Instruct
language:
- en
- fr
- es
- pt
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
- falcon3
---
# Falcon3-7B-Instruct-GPTQ-Int8
The **Falcon3** family of open foundation models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
**Falcon3-7B-Instruct** achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks.
Falcon3-7B-Instruct supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 32K.
This repository contains the GPTQ-quantized 8-bit instruction-tuned 7B Falcon3 model.
## Model Details
- Architecture (see the config sketch after this list)
- Transformer-based causal decoder-only architecture
- 28 decoder blocks
- Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
- Wider head dimension: 256
- High RoPE value to support long context understanding: 1000042
- Uses SwiGLU and RMSNorm
- 32K context length
- 131K vocab size
- Pretrained on 14 teratokens of data comprising web, code, STEM, high-quality and multilingual data, using 1024 H100 GPUs
- Post-trained on 1.2 million samples of STEM, conversational, code, safety and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
- Quantization: GPTQ 8-bit
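These details can be checked against the checkpoint's published configuration. A minimal sketch, assuming the Llama-style attribute names (`num_hidden_layers`, `num_key_value_heads`, `rope_theta`, etc.) that Falcon3 configs typically expose; if any attribute is missing, print the full config object instead:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/Falcon3-7B-Instruct-GPTQ-Int8")

# Attribute names below assume a Llama-style config; verify against
# this checkpoint's config.json if an attribute is absent.
print(config.num_hidden_layers)        # expected: 28 decoder blocks
print(config.num_attention_heads)      # expected: 12 query heads (GQA)
print(config.num_key_value_heads)      # expected: 4 key-value heads
print(config.head_dim)                 # expected: 256
print(config.rope_theta)               # expected: 1000042
print(config.max_position_embeddings)  # expected: 32K context length
print(config.vocab_size)               # expected: ~131K
```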
## Getting started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "tiiuae/Falcon3-7B-Instruct-GPTQ-Int8"
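# Load the GPTQ-quantized weights; device_map="auto" places them on available devices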
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
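# Build the conversation and apply the model's chat template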
prompt = "How many hours in one day?"
messages = [
{"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
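# Tokenize the formatted prompt and move it to the model's device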
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
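# Generate the assistant's reply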
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024
)
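# Strip the prompt tokens so only the newly generated reply is decoded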
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
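Note: depending on your `transformers` version, loading GPTQ checkpoints may additionally require the `optimum` package plus a GPTQ backend such as `auto-gptq` or `gptqmodel` to be installed.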
## Benchmarks
The following table reports results from our internal evaluation pipeline:
| Benchmark | Falcon3-7B-Instruct | Falcon3-7B-Instruct-GPTQ-Int4 | Falcon3-7B-Instruct-AWQ | Falcon3-7B-Instruct-GPTQ-Int8 |
|---|---|---|---|---|
| MMLU | 67.7 | 65.6 | 66.4 | 67.6 |
| MMLU-PRO | 40.9 | 39.1 | 39.9 | 40.9 |
| IFEval | 75.1 | 72.2 | 74.8 | 77.0 |