---
language:
- en
- fr
- es
- pt
tags:
- falcon3
base_model: tiiuae/Falcon3-1B-Instruct
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
---
# Falcon3-1B-Instruct-AWQ

**Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

**Falcon3-1B-Instruct** achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks. Falcon3-1B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 8K tokens.

## Model Details
- Architecture
  - Transformer-based causal decoder-only architecture
  - 18 decoder blocks
  - Grouped-Query Attention (GQA) for faster inference: 8 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE base value to support long-context understanding: 1000042
  - Uses SwiGLU and RMSNorm
  - 8K context length
  - 131K vocab size
- Pruned and healed using larger Falcon models (3B and 7B respectively) on only 80 gigatokens of web, code, STEM, high-quality, and multilingual data, using 256 H100 GPUs
- Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
- Quantization: AWQ 4-bit

The architecture and quantization details above can be cross-checked with the short sketches that follow the Getting started example below.

## Getting started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tiiuae/Falcon3-1B-Instruct-AWQ"

# Load the quantized checkpoint; device_map="auto" places it on GPU if available
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Strip the prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
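Note that loading an AWQ checkpoint through `transformers` also requires the `autoawq` package to be installed. Once the model loads, the architecture values listed under Model Details can be checked straight from the checkpoint's config. Below is a minimal sketch; the field names (`num_hidden_layers`, `num_key_value_heads`, `rope_theta`, ...) are the standard Llama-style `transformers` names and are an assumption about this checkpoint, not something stated on the card.

```python
from transformers import AutoConfig

# Fetches only config.json, not the weights
config = AutoConfig.from_pretrained("tiiuae/Falcon3-1B-Instruct-AWQ")

print(config.num_hidden_layers)           # expected: 18 decoder blocks
print(config.num_attention_heads)         # expected: 8 query heads (GQA)
print(config.num_key_value_heads)         # expected: 4 key-value heads
print(getattr(config, "head_dim", None))  # expected: 256
print(config.rope_theta)                  # expected: 1000042
print(config.vocab_size)                  # expected: ~131K
print(config.quantization_config)         # should report 4-bit AWQ settings
```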

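For reference, 4-bit AWQ checkpoints like this one are typically produced with the `autoawq` library. The card does not describe TII's exact quantization setup, so the recipe below is a generic sketch: the 4-bit width matches the card, while the group size, zero-point, and kernel version are common defaults, not confirmed values.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_model = "tiiuae/Falcon3-1B-Instruct"  # full-precision base listed on this card
quant_path = "Falcon3-1B-Instruct-AWQ"     # hypothetical local output directory

model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# w_bit=4 matches the card; the other settings are common autoawq defaults
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```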
# Benchmarks

We report in the following table our internal pipeline benchmarks:
| Benchmark | Falcon3-1B-Instruct | Falcon3-1B-Instruct-GPTQ-Int4 | Falcon3-1B-Instruct-GPTQ-Int8 | Falcon3-1B-Instruct-AWQ |
|:----------|:-------------------:|:-----------------------------:|:-----------------------------:|:-----------------------:|
| MMLU      | 43.54               | 42.59                         | 43.44                         | 42.91                   |
| MMLU-PRO  | 18.48               | 17.68                         | 18.43                         | 17.28                   |
| IFEval    | 54.83               | 51.33                         | 56.05                         | 51.12                   |
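These numbers come from TII's internal evaluation pipeline, which is not public. For a rough external sanity check you can score the same tasks with EleutherAI's lm-evaluation-harness, though its prompts and task configurations will not match the internal pipeline exactly, so expect the scores to deviate from the table above. A sketch, assuming the `lm_eval` Python API:

```python
import lm_eval

# Rough external check; task configs differ from TII's internal pipeline,
# so the resulting scores will not exactly reproduce the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-1B-Instruct-AWQ",
    tasks=["mmlu", "ifeval"],
)
print(results["results"])
```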
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or want to interact with our researchers and developers.

## Technical Report
Coming soon....

## Citation
If the Falcon3 family of models was helpful to your work, feel free to cite us.

```
@misc{Falcon3,
    title = {The Falcon 3 Family of Open Models},
    url = {https://huggingface.co/blog/falcon3},
    author = {Falcon-LLM Team},
    month = {December},
    year = {2024}
}
```