---
base_model: tiiuae/Falcon3-3B-Instruct
language:
- en
- fr
- es
- pt
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
- falcon3
---

# Falcon3-3B-Instruct-AWQ
The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

Falcon3-3B-Instruct achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks. It supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K.

This repository contains the 4-bit AWQ-quantized version of the instruction-tuned Falcon3-3B model.
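For reference, checkpoints like this one are typically produced with the AutoAWQ library. The sketch below is illustrative only: the quantization settings shown are common AutoAWQ defaults and the output path is hypothetical, not the exact recipe used for this repository.

```python
# Illustrative sketch: producing a 4-bit AWQ checkpoint with AutoAWQ.
# The quant_config values are common defaults, NOT the settings used
# for this repository; the output path is hypothetical.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

src = "tiiuae/Falcon3-3B-Instruct"   # full-precision source model
dst = "Falcon3-3B-Instruct-AWQ"      # hypothetical output directory

model = AutoAWQForCausalLM.from_pretrained(src)
tokenizer = AutoTokenizer.from_pretrained(src)

# 4-bit weights, group size 128, GEMM kernels: standard AWQ settings.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(dst)
tokenizer.save_pretrained(dst)
```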
## Model Details
- Architecture (a config sketch follows this list)
  - Transformer-based causal decoder-only architecture
  - 22 decoder blocks
  - Grouped-Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE base value to support long-context understanding: 1,000,042
  - Uses SwiGLU and RMSNorm
  - 32K context length
  - 131K vocabulary size
- Pruned and healed from Falcon3-7B-Base on only 100 gigatokens of web, code, STEM, high-quality, and multilingual data using 1024 H100 GPUs
- Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
- Supports EN, FR, ES, PT
- Developed by Technology Innovation Institute
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
- Quantization: AWQ 4-bit
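For intuition, here is how the numbers above map onto a transformers configuration. Falcon3 checkpoints load as Llama-style decoders in transformers; this is a minimal sketch assuming a recent transformers version where `LlamaConfig` accepts `head_dim`, not the repository's actual config.json (values not listed above, such as the MLP width, are left at library defaults and will differ).

```python
# Minimal sketch: mapping the listed hyperparameters onto a Llama-style
# transformers config. Values not given in the list above (e.g.
# intermediate_size) are left at defaults and differ from the real config.
from transformers import LlamaConfig

config = LlamaConfig(
    num_hidden_layers=22,            # 22 decoder blocks
    num_attention_heads=12,          # 12 query heads
    num_key_value_heads=4,           # GQA: 4 key-value heads
    head_dim=256,                    # wider head dimension
    hidden_size=12 * 256,            # 3072 = query heads * head_dim
    rope_theta=1_000_042,            # high RoPE base for long context
    max_position_embeddings=32_768,  # 32K context length
    vocab_size=131_072,              # 131K vocabulary
    hidden_act="silu",               # SwiGLU MLP
)
print(config)
```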
## Getting started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tiiuae/Falcon3-3B-Instruct-AWQ"

# Loading an AWQ checkpoint through transformers requires the autoawq package.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Strip the prompt tokens so only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
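AWQ checkpoints can also be served with inference engines that ship AWQ kernels. Below is a minimal sketch using vLLM; whether vLLM's AWQ backend supports this exact repository is an assumption we have not verified, so treat it as a starting point rather than a tested recipe.

```python
# Minimal sketch: serving the AWQ checkpoint with vLLM (assumes vLLM's AWQ
# backend supports this checkpoint; not verified against this exact repo).
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_name = "tiiuae/Falcon3-3B-Instruct-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [{"role": "user", "content": "How many hours in one day?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_name, quantization="awq")
outputs = llm.generate([prompt], SamplingParams(max_tokens=256))
print(outputs[0].outputs[0].text)
```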
## Benchmarks
We report our internal pipeline benchmark results in the table below:
| Benchmark | Falcon3-3B-Instruct | Falcon3-3B-Instruct-GPTQ-Int4 | Falcon3-3B-Instruct-GPTQ-Int8 | Falcon3-3B-Instruct-AWQ |
|---|---|---|---|---|
| MMLU | 55.7 | 53.3 | 55.8 | 53.3 |
| MMLU-PRO | 30.0 | 25.9 | 30.3 | 28.4 |
| IFEval | 69.1 | 62.9 | 68.4 | 67.9 |
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join our Discord server if you have any questions or want to interact with our researchers and developers.
## Technical Report

Coming soon.
## Citation

If the Falcon3 family of models was helpful to your work, feel free to cite us:
```bibtex
@misc{Falcon3,
  title = {The Falcon 3 Family of Open Models},
  url = {https://huggingface.co/blog/falcon3},
  author = {Falcon-LLM Team},
  month = {December},
  year = {2024}
}
```