
Model Card for Gemma-SEA-LION-v4-27B-IT-GGUF

Last updated: 2025-08-25

SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

As of 25 Aug 2025, Gemma-SEA-LION-v4-27B-IT excels at Southeast Asian (SEA) tasks compared to other open models with fewer than 200 billion parameters, and demonstrates performance comparable to that of larger and top closed models. Gemma-SEA-LION-v4-27B-IT was quantized to create Gemma-SEA-LION-v4-27B-IT-GGUF, which can run on a laptop with 16GB of memory using Ollama.

Gemma-SEA-LION-v4-27B-IT-GGUF inherits Gemma 3's:

  • Large 128K context length

  • Image and text understanding capabilities, including document comprehension, visual Q&A, and image-grounded reasoning

  • Advanced function calling and structured outputs to allow for seamless integration into larger systems (see the example below)
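
For example, when the GGUF is served through an OpenAI-compatible endpoint such as llama.cpp's llama-server, structured (JSON) output can be requested per call. A minimal sketch, assuming a server listening on localhost:8080; the prompt is illustrative and response_format support depends on the server version:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Extract the dish and city from: Saya makan laksa di Penang. Reply as JSON with keys dish and city."}
    ],
    "response_format": {"type": "json_object"}
  }'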

Model Description

SEA-LION stands for Southeast Asian Languages In One Network.

Quantization was performed on Gemma-SEA-LION-v4-27B-IT to produce optimized variants that reduce memory requirements while maintaining model quality. These quantized models support inference on a range of consumer-grade GPUs and are compatible with various inference engines.
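
For reference, GGUF quantizations of this kind are typically produced with llama.cpp's conversion and quantization tools. A minimal sketch, assuming a local copy of the base checkpoint; the file names are illustrative:

python convert_hf_to_gguf.py ./Gemma-SEA-LION-v4-27B-IT --outtype bf16 --outfile gemma-sea-lion-v4-27b-it-bf16.gguf
./llama-quantize gemma-sea-lion-v4-27b-it-bf16.gguf gemma-sea-lion-v4-27b-it-Q4_K_M.gguf Q4_K_M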

For tokenization, the model employs the default tokenizer used in Gemma 3 27B Instruct.

  • Developed by: Products Pillar, AI Singapore
  • Funded by: Singapore NRF
  • Model type: Decoder
  • Context length: 128k tokens
  • Language(s): Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese
  • License: Gemma Terms of Use
  • Quantized from model: Gemma-SEA-LION-v4-27B-IT

This repo contains GGUF format model files for aisingapore/Gemma-SEA-LION-v4-27B-IT.

Model Weights included in this repository:

Note that some GGUFs are split into multiple parts. Most tools, such as llama.cpp and those built on it, support split GGUFs; pointing the tool to the first split is sufficient for it to work. If a merge is necessary, it can be done using llama.cpp's gguf-split:

./gguf-split --merge ./path/to/first-split ./path/to/output-gguf

More details: gguf-split guide & README

How to Get Started with the Model

Use the commands below to get started with the model using llama.cpp.

llama.cpp (text-only)

./llama-cli -hf aisingapore/Gemma-SEA-LION-v4-27B-IT-GGUF -p "Write a poem about Dim Sum in Indonesian."

llama.cpp (image input)

wget https://github.com/bebechien/gemma/blob/main/surprise.png?raw=true -O ~/Downloads/surprise.png
./llama-gemma3-cli -hf aisingapore/Gemma-SEA-LION-v4-27B-IT-GGUF -p "Describe this image in Malay." --image ~/Downloads/surprise.png
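
Ollama

As noted above, the quantized model can also be run locally with Ollama. A minimal sketch, assuming Ollama's support for pulling GGUF repositories directly from the Hugging Face Hub; the quantization tag is illustrative:

ollama run hf.co/aisingapore/Gemma-SEA-LION-v4-27B-IT-GGUF:Q4_K_M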

Training Details

Training Data

The dataset comprises Burmese, English, Indonesian, Khmer, Lao, Malay, Mandarin, Tagalog, Tamil, Thai and Vietnamese languages, collected from a mixture of sources including web data, code, open-source datasets, and synthetically generated datasets, amounting to a total of 500 billion tokens sampled from our bucket of 1 trillion tokens.

Evaluation

Performance Test Results

Quantized Variant | Model Size (GB) | VRAM Required (GB) | Time to First Token (s) | Tokens per Second
------------------|-----------------|--------------------|-------------------------|------------------
BF16              | 54.0            | 71.1               | 2.03                    | 41.5
Q8_0              | 28.7            | 47                 | 3.179                   | 37
Q4_K_M            | 16.5            | 35.5               | 2.645                   | 59.9

Additional Remarks:

  • TTFT and Tokens per Second: measured with vLLM on localhost at concurrency = 1.

  • GGUF variants served using llama.cpp with all layers offloaded to the GPU and a context length of 128K.

  • Reported results are the median (p50) values, calculated across 10 requests.

  • Input size: 4K tokens; output size: 1K tokens.

  • Tests were conducted on a system with an NVIDIA H100 GPU.
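
For context, the llama.cpp serving settings above roughly correspond to a llama-server launch such as the following. A minimal sketch using standard llama.cpp flags (-ngl to offload layers to the GPU, -c to set the context length); the exact benchmarking invocation is not reproduced here:

./llama-server -hf aisingapore/Gemma-SEA-LION-v4-27B-IT-GGUF -ngl 99 -c 131072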

Out-of-Scope Use

The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.

Bias, Risks, and Limitations

The model was not tested for robustness against adversarial prompting. Users should be aware that the model exhibits certain limitations that warrant consideration. Like many LLMs, it can hallucinate and occasionally generate irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should therefore exercise caution in interpreting and validating the model's responses due to potential inconsistencies.

Limitations

In terms of vision capability, Gemma-SEA-LION-v4-27B-IT has been trained and fine-tuned exclusively on the text back-end. As a result, its vision capabilities are expected to be comparable to those of Gemma 3 27B IT (google/gemma-3-27b-it) and may not exhibit significant improvements or differences in this area.

More Information

This is the repository for the commercial instruction-tuned model. As noted above, the model has not been aligned for safety; developers and users should perform their own safety fine-tuning and related security measures, and in no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and code.

AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.

For more info, please contact us at sealion@aisingapore.org

Team

Antonyrex Sajeban, Chan Hok Teng Adwin, Cheng Zi Yi Nicholas, Choa Hsueh Mei Esther, Heng Jonathan, Huang Yuli, Hulagadri Adithya Venkatadri, Jann Railey Estrada Montalan, Kang Siow Wei Bryan, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Muhammad Ridzuan Bin Mokhtar, Nagarajan Karthik, Ng Boon Cheong Raymond, Ngee Chia Tai, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Jin Jie Brandon, Ong Tat-Wee David, Ong Zhi Hao, Pereira Mark, Rengarajan Hamsawardhini, Susanto Yosephine, Sutaveephamochanon Anocha, Tan Choon Meng, Tan Chor Phin Evelyn, Tan Siao Wei Jessica, Teng Kok Wai Walter, Teo Eng Sipp Leslie, Tjhi William, Yeo Yeow Tong, Yong Xianbin, Liew Rachel, Liu Bing Jie Darius, Teo Wei Yi, Zhou Lin (NCS), Gopalakrishnan Roshan (NCS), Anda Cuahtemoc (NCS), Sri Devi Wijaya (NCS), Nandi Partha (NCS), Elliott Chris (Google), Mohseni Mohammadreza (Google), Sharan Mayank (Google), Wei Fanny (Google), Tang Jiuqiang (Google), Xu Xiang (Google), Yu Ting (Google), Loh Michelle (Google), Mangal Saurabh (Google), Mukherjee Pratyusha (Google), Sim Stephanie (Google)

Contact

sealion@aisingapore.org
