some1nostr/Ostrich-70B (5 likes)
Text Generation · GGUF · Inference Endpoints · conversational
License: apache-2.0
Files and versions
1 contributor · History: 159 commits
Latest commit by some1nostr: Delete ostrich-70b-9320-IQ1_S.gguf (20cb790, verified, 6 months ago)
File (size) · Last commit message · Last updated
.gitattributes (7.33 kB) · Upload ostrich-70b-9880-IQ1_S.gguf · 6 months ago
README.md (2.12 kB) · Update README.md · 6 months ago
ostrich-70b-8314-Q6_K-00001-of-00002.gguf (42.8 GB, LFS) · Upload 2 files · 6 months ago
ostrich-70b-8314-Q6_K-00002-of-00002.gguf (15.1 GB, LFS) · Upload 2 files · 6 months ago
ostrich-70b-9320-IQ4_XS.gguf (37.9 GB, LFS) · Upload ostrich-70b-9320-IQ4_XS.gguf · 6 months ago
ostrich-70b-9320-Q5_K_M.gguf (50 GB, LFS) · Upload ostrich-70b-9320-Q5_K_M.gguf · 6 months ago
ostrich-70b-9320-q8_0-00001-of-00002.gguf (43 GB, LFS) · Upload 2 files · 6 months ago
ostrich-70b-9320-q8_0-00002-of-00002.gguf (32 GB, LFS) · Upload 2 files · 6 months ago
ostrich-70b-9406-Q4_0.gguf (40 GB, LFS) · Upload ostrich-70b-9406-Q4_0.gguf · 6 months ago
ostrich-70b-9638-IQ1_S.gguf (15.3 GB, LFS) · Upload ostrich-70b-9638-IQ1_S.gguf · 6 months ago
ostrich-70b-9638-IQ2_XS.gguf (21.1 GB, LFS) · Rename ostrich-70b-9638-iq2xs.gguf to ostrich-70b-9638-IQ2_XS.gguf · 6 months ago
ostrich-70b-9638-IQ3_XS.gguf (29.3 GB, LFS) · Upload ostrich-70b-9638-IQ3_XS.gguf · 6 months ago
ostrich-70b-9638-Q4_0.gguf (40 GB, LFS) · Rename ostrich-70b-9638-q4.gguf to ostrich-70b-9638-Q4_0.gguf · 6 months ago
ostrich-70b-9880-IQ1_S.gguf (15.3 GB, LFS) · Upload ostrich-70b-9880-IQ1_S.gguf · 6 months ago
ostrich-70b-9880-Q4_0.gguf (40 GB, LFS) · Upload ostrich-70b-9880-Q4_0.gguf · 6 months ago
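
The repository carries a single model at several GGUF quantization levels (IQ1_S up through Q6_K and q8_0), with the largest builds split into two shards. As a minimal sketch, assuming the huggingface_hub Python client is installed and enough disk space is available for the chosen quantization, one of the files above can be pulled locally as follows; the filenames are taken from the listing, everything else is illustrative.

```python
# Minimal sketch: fetch a GGUF quantization of Ostrich-70B with huggingface_hub.
# Filenames are copied from the file listing above; pick the size/quality trade-off you need.
from huggingface_hub import hf_hub_download

REPO_ID = "some1nostr/Ostrich-70B"

# Single-file quantization (roughly 40 GB on disk).
model_path = hf_hub_download(repo_id=REPO_ID, filename="ostrich-70b-9880-Q4_0.gguf")
print("Model downloaded to:", model_path)

# Sharded quantizations (e.g. the Q6_K build) come as *-00001-of-00002 / *-00002-of-00002
# parts: download both shards, then point a GGUF-capable runtime at the first shard.
for shard in (
    "ostrich-70b-8314-Q6_K-00001-of-00002.gguf",
    "ostrich-70b-8314-Q6_K-00002-of-00002.gguf",
):
    hf_hub_download(repo_id=REPO_ID, filename=shard)
```

From there the downloaded file can be loaded by any GGUF-capable runtime, for example llama.cpp or llama-cpp-python's Llama(model_path=...); this page does not state which runtime the author targets.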