TheBloke/SG-Raccoon-Yi-55B-GGUF
Text Generation · Transformers · GGUF · yi · conversational
License: yi-license
1 contributor · History: 14 commits
Latest commit dbc55ee by TheBloke: GGUF model commit (made with llama.cpp commit 1f5cd83), 12 months ago
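Each quantised file in the table below can be downloaded on its own. A minimal sketch using the huggingface_hub client follows; the choice of the Q4_K_M file and the local_dir value are illustrative assumptions, and any filename from the table works the same way.

```python
# Sketch: download a single GGUF quant from this repo with huggingface_hub.
# The filename and local_dir below are example choices, not requirements.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/SG-Raccoon-Yi-55B-GGUF",
    filename="sg-raccoon-yi-55b.Q4_K_M.gguf",  # any file from the table below
    local_dir=".",                             # assumed target directory
)
print(model_path)
```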
| File | Size | LFS | Last commit message | Updated |
|------|------|-----|---------------------|---------|
| .gitattributes | 2.38 kB | | Upload in splits of max 50GB due to HF 50GB limit. (made with llama.cpp commit 1f5cd83) | 12 months ago |
| config.json | 26 Bytes | | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q2_K.gguf | 23.4 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q3_K_L.gguf | 29.3 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q3_K_M.gguf | 26.8 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q3_K_S.gguf | 24.1 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q4_0.gguf | 31.4 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q4_K_M.gguf | 33.3 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q4_K_S.gguf | 31.5 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q5_0.gguf | 38.3 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q5_K_M.gguf | 39.3 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q5_K_S.gguf | 38.3 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q6_K.gguf | 45.6 GB | LFS | GGUF model commit (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q8_0.gguf-split-a | 29.5 GB | LFS | Upload in splits of max 50GB due to HF 50GB limit. (made with llama.cpp commit 1f5cd83) | 12 months ago |
| sg-raccoon-yi-55b.Q8_0.gguf-split-b | 29.5 GB | LFS | Upload in splits of max 50GB due to HF 50GB limit. (made with llama.cpp commit 1f5cd83) | 12 months ago |
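The Q8_0 quant is stored as two split files because single uploads to the Hub are capped at 50 GB. Before use, the parts have to be concatenated back into one .gguf file; TheBloke's GGUF repos usually show the equivalent `cat` one-liner in the model card. The sketch below joins the parts in Python and then loads the result with llama-cpp-python. The n_ctx and n_gpu_layers values are illustrative assumptions, not settings taken from this page.

```python
# Sketch: rejoin the split Q8_0 parts, then load the model with llama-cpp-python.
# n_ctx and n_gpu_layers are example values, not settings documented in this repo.
import shutil
from pathlib import Path

from llama_cpp import Llama

joined = Path("sg-raccoon-yi-55b.Q8_0.gguf")
parts = sorted(Path(".").glob("sg-raccoon-yi-55b.Q8_0.gguf-split-*"))  # split-a, split-b

# Concatenate the parts in alphabetical order into a single GGUF file.
with joined.open("wb") as out:
    for part in parts:
        with part.open("rb") as src:
            shutil.copyfileobj(src, out)

llm = Llama(model_path=str(joined), n_ctx=4096, n_gpu_layers=-1)
print(llm("Q: What is the capital of France? A:", max_tokens=32)["choices"][0]["text"])
```

The split parts are not loadable on their own; only the joined file is. Any of the single-file quants in the table (Q2_K through Q6_K) can be passed to Llama directly without the concatenation step.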