# Document OCR using GLM-OCR
This dataset contains OCR results produced by GLM-OCR, a compact 0.9B-parameter OCR model with state-of-the-art benchmark results, run over the images in `biglam/rubenstein-manuscript-catalog`.
## Processing Details

- Source Dataset: [biglam/rubenstein-manuscript-catalog](https://huggingface.co/datasets/biglam/rubenstein-manuscript-catalog)
- Model: [zai-org/GLM-OCR](https://huggingface.co/zai-org/GLM-OCR)
- Task: text recognition
- Number of Samples: 50
- Processing Time: 5.7 min
- Processing Date: 2026-02-15 00:40 UTC
## Configuration

- Image Column: `image`
- Output Column: `markdown`
- Dataset Split: `train`
- Batch Size: 16
- Max Model Length: 8,192 tokens
- Max Output Tokens: 8,192
- Temperature: 0.01
- Top P: 1e-05
- GPU Memory Utilization: 80%
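
These settings map directly onto vLLM-style generation parameters. The sketch below is illustrative only: it assumes the processing script serves the model with vLLM (not confirmed by this card), and simply mirrors the values listed above.

```python
# Illustrative sketch only: assumes GLM-OCR is served via vLLM.
# Values mirror the Configuration section of this card.
from vllm import LLM, SamplingParams

llm = LLM(
    model="zai-org/GLM-OCR",     # model ID from this card
    max_model_len=8192,          # Max Model Length
    gpu_memory_utilization=0.8,  # GPU Memory Utilization (80%)
)

sampling = SamplingParams(
    temperature=0.01,  # near-greedy decoding
    top_p=1e-05,       # effectively restricts sampling to the top token
    max_tokens=8192,   # Max Output Tokens
)
```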
## Model Information
GLM-OCR is a compact, high-performance OCR model:
- 0.9B parameters
- 94.62% on OmniDocBench V1.5
- CogViT visual encoder + GLM-0.5B language decoder
- Multi-Token Prediction (MTP) loss for efficiency
- Multilingual: zh, en, fr, es, ru, de, ja, ko
- MIT licensed
## Dataset Structure

The dataset contains all original columns plus:

- `markdown`: the extracted text in markdown format
- `inference_info`: a JSON list tracking all OCR models applied to this dataset
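
A minimal sketch of inspecting the new columns with the `datasets` library; the repo ID below is a placeholder for wherever this dataset is hosted, and the `json.loads` call assumes `inference_info` is stored as a JSON string.

```python
# Minimal sketch: load the dataset and inspect the OCR output columns.
# Replace "<output-dataset>" with this dataset's actual Hub repo ID.
import json
from datasets import load_dataset

ds = load_dataset("<output-dataset>", split="train")

row = ds[0]
print(row["markdown"][:500])              # extracted text in markdown format
print(json.loads(row["inference_info"]))  # OCR models applied to this dataset
```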
## Reproduction

```bash
uv run https://huggingface.co/datasets/uv-scripts/ocr/raw/main/glm-ocr.py \
  biglam/rubenstein-manuscript-catalog \
  <output-dataset> \
  --image-column image \
  --batch-size 16 \
  --task ocr
```
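
Replace `<output-dataset>` with the Hub repo ID you want the results pushed to; the remaining flags mirror the Configuration section above.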
Generated with [UV Scripts](https://huggingface.co/uv-scripts).