bartowski / gemma2-gutenberg-27B-GGUF

Tags: Text Generation · Transformers · GGUF · imatrix · conversational · Inference Endpoints
Dataset: jondurbin/gutenberg-dpo-v0.1
License: gemma
Files and versions
1 contributor · 16 commits
Latest commit b7d43e5 (verified, 3 months ago) by bartowski: Upload gemma2-gutenberg-27B-Q3_K_XL.gguf with huggingface_hub
All files were uploaded 3 months ago with the commit message "Upload <filename> with huggingface_hub"; the .gguf weight files are stored via Git LFS.

File                                   Size      Storage
.gitattributes                         2.56 kB
gemma2-gutenberg-27B-IQ4_XS.gguf       14.8 GB   LFS
gemma2-gutenberg-27B-Q3_K_XL.gguf      14.8 GB   LFS
gemma2-gutenberg-27B-Q4_0.gguf         15.7 GB   LFS
gemma2-gutenberg-27B-Q4_0_4_4.gguf     15.6 GB   LFS
gemma2-gutenberg-27B-Q4_0_4_8.gguf     15.6 GB   LFS
gemma2-gutenberg-27B-Q4_0_8_8.gguf     15.6 GB   LFS
gemma2-gutenberg-27B-Q4_K_L.gguf       16.9 GB   LFS
gemma2-gutenberg-27B-Q4_K_M.gguf       16.6 GB   LFS
gemma2-gutenberg-27B-Q4_K_S.gguf       15.7 GB   LFS
gemma2-gutenberg-27B-Q5_K_L.gguf       19.7 GB   LFS
gemma2-gutenberg-27B-Q5_K_M.gguf       19.4 GB   LFS
gemma2-gutenberg-27B-Q5_K_S.gguf       18.9 GB   LFS
gemma2-gutenberg-27B-Q6_K.gguf         22.3 GB   LFS
gemma2-gutenberg-27B-Q6_K_L.gguf       22.6 GB   LFS
gemma2-gutenberg-27B-Q8_0.gguf         28.9 GB   LFS
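Since every weight file in this repo was pushed with huggingface_hub, the same library can fetch a single quantization without cloning every variant. A minimal sketch, assuming huggingface_hub is installed; the Q4_K_M file is just one example, and any filename from the listing above works:

```python
# Minimal sketch: download one quantized GGUF file from this repo
# without pulling all of the other variants.
# Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Any filename from the table above can be substituted here;
# Q4_K_M (16.6 GB) is shown purely as an example.
path = hf_hub_download(
    repo_id="bartowski/gemma2-gutenberg-27B-GGUF",
    filename="gemma2-gutenberg-27B-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded .gguf file
```

The returned path points at the file in the local Hugging Face cache; it can then be loaded by any GGUF-compatible runtime such as llama.cpp.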