# Wild-OmniDocBench

**A Real-World Captured Document Parsing Benchmark for Robustness Evaluation**

Chinese README (README_ZH.md) • Paper • GitHub • HuggingFace
## Overview
Wild-OmniDocBench is a benchmark for evaluating document parsing robustness under real-world captured conditions. It is derived from OmniDocBench by converting scanned/digital documents into naturally captured images through controlled physical simulation, including printing, deformation, and photography under diverse lighting conditions.
Unlike standard benchmarks that rely on clean scanned or digital-born pages, Wild-OmniDocBench introduces realistic artifacts such as:
- Geometric distortions (perspective shifts, bends, wrinkles)
- Illumination variations (directional, uneven, low-light)
- Screen-capture artifacts (moiré patterns, reflections)
- Environmental interference (background overlays, shadows)
**Note:** The current release of Wild-OmniDocBench corresponds to OmniDocBench v1.5. The extended portions for v1.6 are being processed and will be released in a future update.
## Benchmark Statistics
| Item | Details |
|---|---|
| Total Images | 1,350 |
| Source | Real-world captured variant of OmniDocBench |
| Document Types | Books, Textbooks, Papers, PPTs, Newspapers, Notes, Exams, Magazines, Financial Reports, etc. |
| Capture Methods | (i) Print + physical deformation + photography; (ii) Screen display + re-capture |
| Annotations | Inherited from OmniDocBench (full structural and reading-order annotations) |
## Data Format

### Directory Structure

```
Wild_OmniDocBench/
├── README.md                # English README
├── README_ZH.md             # Chinese README
├── wild_omnidocbench.zip    # Benchmark images (1,350 JPGs)
└── assets/
    └── overview.png         # Overview figure
```
### Images

After unzipping `wild_omnidocbench.zip`, images are named following the OmniDocBench convention:

```
{doc_type}_{language}_{source}_{page}.jpg
```

For example: `book_en_A.Concise.Introduction.to.Linear.Algebra_page_065.jpg`
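As a convenience, here is a minimal Python sketch that extracts the archive and splits a filename into its components. The paths are hypothetical, and the parsing assumes the page number always follows a `_page_` token, as in the example above:

```python
import zipfile
from pathlib import Path

# Hypothetical paths; adjust to wherever the benchmark was downloaded.
archive = Path("wild_omnidocbench.zip")
out_dir = Path("wild_omnidocbench_images")

with zipfile.ZipFile(archive) as zf:
    zf.extractall(out_dir)

def parse_name(filename: str) -> dict:
    """Split an OmniDocBench-style filename into its components.

    Assumes {doc_type}_{language}_{source}_page_{page}.jpg, where
    {source} itself may contain dots and underscores.
    """
    stem = Path(filename).stem
    doc_type, language, rest = stem.split("_", 2)
    source, page = rest.rsplit("_page_", 1)
    return {"doc_type": doc_type, "language": language,
            "source": source, "page": page}

print(parse_name("book_en_A.Concise.Introduction.to.Linear.Algebra_page_065.jpg"))
# {'doc_type': 'book', 'language': 'en',
#  'source': 'A.Concise.Introduction.to.Linear.Algebra', 'page': '065'}
```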
## Evaluation

Wild-OmniDocBench uses the same annotation format and evaluation protocol as OmniDocBench. To evaluate on Wild-OmniDocBench:

1. Obtain the annotations and evaluation scripts from the official OmniDocBench repository: https://github.com/opendatalab/OmniDocBench
2. Replace the image source with the Wild-OmniDocBench images (from `wild_omnidocbench.zip`).
3. Run the evaluation following the OmniDocBench protocol. Metrics include:
   - Overall Score (↑)
   - Text Edit Distance (↓, sketched below)
   - Formula CDM (↑)
   - Table TEDS (↑)
   - Reading Order Edit Distance (↓)
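The official scripts are authoritative for reported numbers. As a rough illustration of the two edit-distance metrics, here is a minimal sketch of Levenshtein distance normalized by the longer string (0 = exact match, lower is better); the exact normalization in the OmniDocBench scripts may differ:

```python
def normalized_edit_distance(pred: str, gt: str) -> float:
    """Levenshtein distance divided by the length of the longer string."""
    m, n = len(pred), len(gt)
    if max(m, n) == 0:
        return 0.0
    prev = list(range(n + 1))  # classic dynamic-programming table, row by row
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == gt[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / max(m, n)

print(normalized_edit_distance("kitten", "sitting"))  # 3/7 ≈ 0.4286
```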
## Key Results
Performance degradation from OmniDocBench to Wild-OmniDocBench (from the DocHumming paper):
| Model | Type | Overall (Origin) | Overall (Wild) | Degradation |
|---|---|---|---|---|
| DocHumming (1B) | End2End | 93.75 | 87.03 | −6.72 |
| dots.ocr (3B) | End2End | 88.41 | 78.01 | −10.40 |
| Qwen3-VL (235B) | General | 89.15 | 79.69 | −9.46 |
| MinerU2.5 (1.2B) | Modular | 90.67 | 70.91 | −19.76 |
| PaddleOCR-VL (0.9B) | Modular | 91.93 | 72.19 | −19.74 |
End-to-end models exhibit significantly less degradation than modular cascaded pipelines under real-world capture conditions.
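For clarity, the Degradation column is simply Overall (Wild) minus Overall (Origin), so a more negative value means a larger drop. A quick check against the table above:

```python
# (Origin, Wild) overall scores from the table above.
scores = {
    "DocHumming (1B)":     (93.75, 87.03),
    "dots.ocr (3B)":       (88.41, 78.01),
    "Qwen3-VL (235B)":     (89.15, 79.69),
    "MinerU2.5 (1.2B)":    (90.67, 70.91),
    "PaddleOCR-VL (0.9B)": (91.93, 72.19),
}
for model, (origin, wild) in scores.items():
    print(f"{model}: {wild - origin:+.2f}")  # e.g. DocHumming (1B): -6.72
```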
## Citation

```bibtex
@misc{li2026towardsrealworlddocument,
  title={Towards Real-World Document Parsing via Realistic Scene Synthesis and Document-Aware Training},
  author={Gengluo Li and Pengyuan Lyu and Chengquan Zhang and Huawen Shen and Liang Wu and Xingyu Wan and Gangyan Zeng and Han Hu and Can Ma and Yu Zhou},
  year={2026},
  journal={arXiv preprint arXiv:2603.23885},
  url={https://arxiv.org/abs/2603.23885},
}
```
## Acknowledgements
Wild-OmniDocBench is built upon OmniDocBench. We thank the OmniDocBench team for providing the original annotations and evaluation framework.
## License
This benchmark is released for research purposes only.