Gemma 7B GGUF
Original model: gemma-7b
Model creator: google
This repo contains GGUF format model files for Google’s Gemma-7B.
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in resource-limited environments such as a laptop, a desktop, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.
Learn more on Google’s Model page.
What is GGUF?
GGUF is a binary file format for storing models for inference with llama.cpp. It was introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which llama.cpp no longer supports; the format is currently at version 3 (GGUFv3). This model was converted using llama.cpp build 2226 (revision eccd7a2).
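As a quick illustration of the format's on-disk layout, the sketch below checks the leading magic bytes and reads the version field. The byte layout (the 4-byte magic `b"GGUF"` followed by a little-endian uint32 version, with tensor and metadata counts after that) follows the public GGUF specification; the helper name is ours, not part of any library.

```python
import struct

def read_gguf_version(data: bytes) -> int:
    """Return the GGUF format version from the start of a GGUF file."""
    # A GGUF file opens with the 4-byte magic b"GGUF" followed by a
    # little-endian uint32 format version (3 for current GGUFv3 files).
    magic, version = struct.unpack_from("<4sI", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return version

# Synthetic 8-byte header for illustration only (magic + version 3);
# a real file continues with tensor and metadata key/value counts.
header = b"GGUF" + struct.pack("<I", 3)
print(read_gguf_version(header))  # 3
```

In practice you would pass the first few bytes of an actual `.gguf` file; tools such as llama.cpp perform this same check before loading a model.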
Download & run with cnvrs on iPhone, iPad, and Mac!
cnvrs is the best app for private, local AI on your device:
- create & save Characters with custom system prompts & temperature settings
- download and experiment with any GGUF model you can find on HuggingFace!
- make it your own with custom Theme colors
- powered by Metal ⚡️ & llama.cpp, with haptics during response streaming!
- try it out yourself today, on TestFlight!
- follow cnvrs on Twitter to stay up to date
Original Model Evaluation
| Benchmark | Metric | 2B Params | 7B Params |
|---|---|---|---|
| MMLU | 5-shot, top-1 | 42.3 | 64.3 |
| HellaSwag | 0-shot | 71.4 | 81.2 |
| PIQA | 0-shot | 77.3 | 81.2 |
| SocialIQA | 0-shot | 59.7 | 51.8 |
| BoolQ | 0-shot | 69.4 | 83.2 |
| WinoGrande | partial score | 65.4 | 72.3 |
| CommonsenseQA | 7-shot | 65.3 | 71.3 |
| OpenBookQA | | 47.8 | 52.8 |
| ARC-e | | 73.2 | 81.5 |
| ARC-c | | 42.1 | 53.2 |
| TriviaQA | 5-shot | 53.2 | 63.4 |
| Natural Questions | 5-shot | - | 23 |
| HumanEval | pass@1 | 22.0 | 32.3 |
| MBPP | 3-shot | 29.2 | 44.4 |
| GSM8K | maj@1 | 17.7 | 46.4 |
| MATH | 4-shot | 11.8 | 24.3 |
| AGIEval | | 24.2 | 41.7 |
| BIG-Bench | | 35.2 | 55.1 |
| Average | | 54.0 | 56.4 |
| Benchmark | Metric | 2B Params | 7B Params |
|---|---|---|---|
| RealToxicity | average | 6.86 | 7.90 |
| BOLD | | 45.57 | 49.08 |
| CrowS-Pairs | top-1 | 45.82 | 51.33 |
| BBQ Ambig | 1-shot, top-1 | 62.58 | 92.54 |
| BBQ Disambig | top-1 | 54.62 | 71.99 |
| Winogender | top-1 | 51.25 | 54.17 |
| TruthfulQA | | 44.84 | 31.81 |
| Winobias 1_2 | | 56.12 | 59.09 |
| Winobias 2_2 | | 91.10 | 92.23 |
| Toxigen | | 29.77 | 39.59 |
Model tree for brittlewis12/gemma-7b-GGUF
Base model: google/gemma-7b