Quantized Google Gemma-3-270M Model

This repository provides the Gemma-3-270M model, one of Google’s lightweight and efficient open models from the Gemma family. With only 270 million parameters, it is designed for fast inference, prototyping, and edge deployment, while retaining strong general-purpose reasoning, text understanding, and instruction-following abilities.

Model Overview

  • Original Model: Gemma-3-270M
  • Architecture: Decoder-only Transformer
  • Base Model: Gemma-3 series
  • Modalities: Text only
  • Developer: Google DeepMind
  • License: Gemma Terms of Use
  • Language: English

Quantization Details

Q4_K_M Version

  • Approximately 54% size reduction
  • Lower memory footprint (~241 MB)
  • Slight performance degradation in complex reasoning scenarios

Q5_K_M Version

  • Approximately 52% size reduction
  • Slightly larger footprint (~247 MB) with higher fidelity
  • Better performance retention, recommended when quality is a priority
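The quoted reductions can be sanity-checked with simple arithmetic, assuming an FP16 baseline of roughly 2 bytes per parameter. This is only an approximation: real GGUF files carry metadata and keep some tensors at higher precision, so the exact percentages above differ by a few points.

```python
# Rough sanity check of the quoted size reductions, assuming an
# FP16 baseline of ~2 bytes per parameter (an approximation: real
# GGUF files include metadata and mixed-precision tensors).

PARAMS = 270_000_000                         # parameter count
FP16_BASELINE_MB = PARAMS * 2 / 1_000_000    # ~540 MB at FP16


def reduction(quantized_mb: float) -> float:
    """Percent size reduction relative to the FP16 baseline."""
    return 100 * (1 - quantized_mb / FP16_BASELINE_MB)


q4_reduction = reduction(241)  # Q4_K_M file, ~241 MB
q5_reduction = reduction(247)  # Q5_K_M file, ~247 MB

print(f"Q4_K_M: {q4_reduction:.1f}% smaller than FP16")
print(f"Q5_K_M: {q5_reduction:.1f}% smaller than FP16")
```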

Key Features

  • Extremely lightweight (270M parameters) – easy to run on CPU, edge devices, or low-resource GPUs
  • General-purpose instruction following
  • Useful for text completion, reasoning, and lightweight assistant prototypes
  • Designed to be fast and memory efficient while maintaining quality
  • Good foundation for research, experimentation, and fine-tuning

llama.cpp (text-only)

./llama-cli -hf SandLogicTechnologies/gemma-3-270m_Q4_k_m -p "What is LLM Quantization"

Usage

This model is intended for developers and researchers who need a lightweight LLM for prototyping and small-scale applications.
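For Python-based prototyping, the same GGUF files can be loaded with the llama-cpp-python bindings instead of the llama-cli binary. The sketch below is illustrative, not an official example: the local model path is a placeholder, and the helper function simply wraps a message in Gemma's standard chat turn markers.

```python
# Illustrative sketch using the llama-cpp-python bindings
# (pip install llama-cpp-python). The model path is a placeholder
# for a locally downloaded GGUF file.


def format_gemma_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )


if __name__ == "__main__":
    from llama_cpp import Llama  # requires llama-cpp-python

    llm = Llama(
        model_path="gemma-3-270m_Q4_k_m.gguf",  # placeholder path
        n_ctx=2048,
    )
    out = llm(
        format_gemma_prompt("What is LLM quantization?"),
        max_tokens=128,
    )
    print(out["choices"][0]["text"])
```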

Acknowledgments

These quantized models are based on the original work by the Google development team.

Special thanks to:

  • The Google team for developing and releasing the gemma-3-270m model.

  • Georgi Gerganov and the entire llama.cpp open-source community for enabling efficient model quantization and inference via the GGUF format.


Contact

For any inquiries or support, please contact us at support@sandlogic.com or visit our website.
