Gemma-2B-it GGUF Quantized

Usage

This model can be used with the latest version of llama.cpp and with LM Studio versions newer than 0.2.16.
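As a minimal sketch of local usage, the example below loads the 4-bit GGUF file through the llama-cpp-python bindings (the llama.cpp Python wrapper); the filename gemma-2b-it-q4_k_m.gguf is illustrative and should be replaced with the file downloaded from this repository:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename is illustrative; substitute the file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2b-it-q4_k_m.gguf",  # path to the local 4-bit GGUF file
    n_ctx=2048,                            # context window size
)

# Gemma-2B-it is instruction-tuned, so the chat completion API is used here.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

In LM Studio, the same GGUF file can be loaded directly through the model browser without any code.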

Model details

Format: GGUF
Model size: 2.51B params
Architecture: gemma
Quantization: 4-bit