Camel-Doc-OCR-080125-GGUF

The Camel-Doc-OCR-080125 model is a fine-tuned version of Qwen2.5-VL-7B-Instruct, optimized for document retrieval, content extraction, and analysis recognition. Built on the Qwen2.5-VL architecture, it was trained on the Opendoc2-Analysis-Recognition dataset to strengthen document comprehension and information extraction.

Model Files

| File Name | Quant Type | File Size |
| --- | --- | --- |
| Camel-Doc-OCR-080125.f16.gguf | F16 | 15.2 GB |
| Camel-Doc-OCR-080125.Q2_K.gguf | Q2_K | 3.02 GB |
| Camel-Doc-OCR-080125.Q3_K_L.gguf | Q3_K_L | 4.09 GB |
| Camel-Doc-OCR-080125.Q3_K_M.gguf | Q3_K_M | 3.81 GB |
| Camel-Doc-OCR-080125.Q3_K_S.gguf | Q3_K_S | 3.49 GB |
| Camel-Doc-OCR-080125.Q4_K_M.gguf | Q4_K_M | 4.68 GB |
| Camel-Doc-OCR-080125.Q4_K_S.gguf | Q4_K_S | 4.46 GB |
| Camel-Doc-OCR-080125.Q5_K_M.gguf | Q5_K_M | 5.44 GB |
| Camel-Doc-OCR-080125.Q5_K_S.gguf | Q5_K_S | 5.32 GB |
| Camel-Doc-OCR-080125.Q6_K.gguf | Q6_K | 6.25 GB |
| Camel-Doc-OCR-080125.Q8_0.gguf | Q8_0 | 8.1 GB |
| Camel-Doc-OCR-080125.IQ4_XS.gguf | IQ4_XS | 4.25 GB |
| Camel-Doc-OCR-080125.i1-IQ1_M.gguf | i1-IQ1_M | 2.04 GB |
| Camel-Doc-OCR-080125.i1-IQ1_S.gguf | i1-IQ1_S | 1.9 GB |
| Camel-Doc-OCR-080125.i1-IQ2_M.gguf | i1-IQ2_M | 2.78 GB |
| Camel-Doc-OCR-080125.i1-IQ2_S.gguf | i1-IQ2_S | 2.6 GB |
| Camel-Doc-OCR-080125.i1-IQ2_XS.gguf | i1-IQ2_XS | 2.47 GB |
| Camel-Doc-OCR-080125.i1-IQ2_XXS.gguf | i1-IQ2_XXS | 2.27 GB |
| Camel-Doc-OCR-080125.i1-IQ3_M.gguf | i1-IQ3_M | 3.57 GB |
| Camel-Doc-OCR-080125.i1-IQ3_S.gguf | i1-IQ3_S | 3.5 GB |
| Camel-Doc-OCR-080125.i1-IQ3_XS.gguf | i1-IQ3_XS | 3.35 GB |
| Camel-Doc-OCR-080125.i1-IQ3_XXS.gguf | i1-IQ3_XXS | 3.11 GB |
| Camel-Doc-OCR-080125.i1-IQ4_NL.gguf | i1-IQ4_NL | 4.44 GB |
| Camel-Doc-OCR-080125.i1-IQ4_XS.gguf | i1-IQ4_XS | 4.22 GB |
| Camel-Doc-OCR-080125.i1-Q2_K.gguf | i1-Q2_K | 3.02 GB |
| Camel-Doc-OCR-080125.i1-Q2_K_S.gguf | i1-Q2_K_S | 2.83 GB |
| Camel-Doc-OCR-080125.i1-Q3_K_L.gguf | i1-Q3_K_L | 4.09 GB |
| Camel-Doc-OCR-080125.i1-Q3_K_M.gguf | i1-Q3_K_M | 3.81 GB |
| Camel-Doc-OCR-080125.i1-Q3_K_S.gguf | i1-Q3_K_S | 3.49 GB |
| Camel-Doc-OCR-080125.i1-Q4_0.gguf | i1-Q4_0 | 4.44 GB |
| Camel-Doc-OCR-080125.i1-Q4_1.gguf | i1-Q4_1 | 4.87 GB |
| Camel-Doc-OCR-080125.i1-Q4_K_M.gguf | i1-Q4_K_M | 4.68 GB |
| Camel-Doc-OCR-080125.i1-Q4_K_S.gguf | i1-Q4_K_S | 4.46 GB |
| Camel-Doc-OCR-080125.i1-Q5_K_M.gguf | i1-Q5_K_M | 5.44 GB |
| Camel-Doc-OCR-080125.i1-Q5_K_S.gguf | i1-Q5_K_S | 5.32 GB |
| Camel-Doc-OCR-080125.i1-Q6_K.gguf | i1-Q6_K | 6.25 GB |
| Camel-Doc-OCR-080125.imatrix.gguf | imatrix | 4.56 MB |
| Camel-Doc-OCR-080125.mmproj-Q8_0.gguf | mmproj-Q8_0 | 853 MB |
| Camel-Doc-OCR-080125.mmproj-f16.gguf | mmproj-f16 | 1.35 GB |

Quants Usage

(Sorted by size, not necessarily quality. IQ quants are often preferable to similarly sized non-IQ quants.)

ikawrakow published a handy graph comparing some lower-quality quant types (lower is better).
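Since this is a vision model, llama.cpp needs both a model quant and the matching mmproj projector file from the table above. A hedged usage sketch, assuming a recent llama.cpp build that ships the `llama-mtmd-cli` multimodal tool and an installed `huggingface-cli`; the image name and prompt are placeholders:

```shell
# Fetch one quant plus the f16 multimodal projector from this repo.
huggingface-cli download prithivMLmods/Camel-Doc-OCR-080125-GGUF \
  Camel-Doc-OCR-080125.Q4_K_M.gguf \
  Camel-Doc-OCR-080125.mmproj-f16.gguf \
  --local-dir .

# Run OCR/extraction on a document image.
llama-mtmd-cli \
  -m Camel-Doc-OCR-080125.Q4_K_M.gguf \
  --mmproj Camel-Doc-OCR-080125.mmproj-f16.gguf \
  --image invoice.png \
  -p "Extract all text from this document."
```

Older llama.cpp builds exposed Qwen2-VL support through a differently named example binary, so check which multimodal CLI your build provides.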

Downloads last month: 2,901
Format: GGUF
Model size: 8B params
Architecture: qwen2vl


Model repository: prithivMLmods/Camel-Doc-OCR-080125-GGUF