fix: Revise README for accuracy, usability, and dependency compatibility
#2 · opened by vawsgit
At a Glance
- Source: Usability audit of the `amazon/doc_split` Hugging Face dataset README
- Items: 10 targeted repairs + 1 dependency fix
- Categories: Documentation accuracy, Quick Start, schema docs, CLI defaults, project structure, dependency compatibility
High-Impact Changes

| # | Change | Files | Why It Matters |
|---|---|---|---|
| 1 | 📖 Add `load_dataset()` Quick Start at top of README | README.md | Users can load the dataset in 3 lines without reading the entire toolkit README |
| 2 | 🔧 Fix 6 incorrect CLI default values | README.md | `--num-docs-train` was documented as 8 (actual: 800), `--num-docs-test` as 5 (actual: 500), `--num-docs-val` as 2 (actual: 200) → users copying examples would get wrong results |
| 3 | 📦 Loosen `tokenizers==0.20.3` pin to `>=0.20.3` | requirements.txt | Exact pin conflicts with `transformers>=4.48.0`, which requires `tokenizers>=0.21` → `pip install` fails |
| 4 | ⚠️ Add Python 3.12 recommendation for textractor commands | README.md | `amazon-textract-textractor` C extensions (`editdistance`) fail to build on Python 3.13+ |
All Changes by Category
🚀 Quick Start & Usability
- Add `load_dataset()` Quick Start section: New section at the top of README with working code examples for loading splits, accessing fields, and iterating subdocuments. (README.md)
- Add Dataset Schema section: Documents all 13 fields across 3 nesting levels (doc, subdocument, page) in table format. (README.md)
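The doc → subdocument → page nesting the new Schema section documents can be pictured with a short walk. The field names below (`subdocuments`, `pages`, `page_num`) are illustrative assumptions for this sketch, not the verified 13-field schema:

```python
# Sketch: walk the documented 3-level nesting (doc -> subdocument -> page).
# Field names here are assumptions for illustration only.
def iter_pages(doc):
    """Yield every page dict across all subdocuments of one document record."""
    for sub in doc.get("subdocuments", []):
        for page in sub.get("pages", []):
            yield page

# Toy record mimicking the nesting (not real data):
doc = {
    "doc_id": "d1",
    "subdocuments": [
        {"subdoc_id": "s1", "pages": [{"page_num": 1}, {"page_num": 2}]},
        {"subdoc_id": "s2", "pages": [{"page_num": 3}]},
    ],
}
print(sum(1 for _ in iter_pages(doc)))  # 3
```

The same loop shape applies to records returned by `load_dataset`, modulo the actual field names documented in the README's Schema table.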
🔧 CLI Defaults & Parameters
- Fix `--num-docs-train` default: 8 → 800. (README.md)
- Fix `--num-docs-test` default: 5 → 500. (README.md)
- Fix `--num-docs-val` default: 2 → 200. (README.md)
- Add note about asset script path defaults: Code defaults (`../raw_data`, `../processed_assets`) assume running from `src/assets/`; README examples use repo-root paths. (README.md)
- Consolidate duplicate benchmark sections: Removed the brief duplicate, kept the detailed version with corrected values. (README.md)
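The corrected defaults amount to an argparse setup along these lines. This is a minimal stub showing only the three flags named above; the rest of the real parser is omitted and its structure is assumed:

```python
import argparse

# Sketch of the benchmark CLI defaults after the fix; only the three
# corrected flags are shown, everything else in the real parser is omitted.
def build_parser():
    parser = argparse.ArgumentParser(description="doc_split benchmark runner (sketch)")
    parser.add_argument("--num-docs-train", type=int, default=800)  # README said 8
    parser.add_argument("--num-docs-test", type=int, default=500)   # README said 5
    parser.add_argument("--num-docs-val", type=int, default=200)    # README said 2
    return parser

args = build_parser().parse_args([])
print(args.num_docs_train, args.num_docs_test, args.num_docs_val)  # 800 500 200
```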
📄 Data Format Documentation
- Replace fictional output format with actual formats: Removed the fabricated JSON example; added the real Ground Truth JSON schema and a CSV column reference. (README.md)
- Document dual asset path prefixes: JSON uses `rvl-cdip-nmp-assets/`, CSV uses `data/assets/` → now explained. (README.md)
- Add asset path resolution note: Clarifies that `image_path`/`text_path` reference assets not included in the dataset download. (README.md)
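One way to reconcile the two prefixes when joining JSON and CSV records is a small normalizer. The helper below is hypothetical, written for illustration only, and is not part of the toolkit:

```python
# Hypothetical helper: map a JSON-style asset path ("rvl-cdip-nmp-assets/...")
# onto the CSV-style prefix ("data/assets/..."), per the dual-prefix note.
JSON_PREFIX = "rvl-cdip-nmp-assets/"
CSV_PREFIX = "data/assets/"

def to_csv_prefix(path: str) -> str:
    """Rewrite a JSON-prefixed asset path to the CSV prefix; pass others through."""
    if path.startswith(JSON_PREFIX):
        return CSV_PREFIX + path[len(JSON_PREFIX):]
    return path

print(to_csv_prefix("rvl-cdip-nmp-assets/images/0001.png"))
# data/assets/images/0001.png
```

Note that either prefix still points at assets outside the dataset download itself, per the path-resolution note above.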
🗂️ Project Structure
- Fix project structure tree: `strategies/` → `shuffle_strategies/`; added `asset_creator.py`, `deepseek_ocr.py`, `base_strategy.py`, `__init__.py`, and `models.py`. Removed the non-existent `split_mapping.json`. (README.md)
📊 Dataset Statistics
- Qualify document counts to a specific strategy: Counts (417/96/51) are for `mono_rand/large` only → other strategy/size combos vary significantly. (README.md)
📦 Dependency Compatibility
- Loosen tokenizers pin: `tokenizers==0.20.3` → `tokenizers>=0.20.3`. Resolves the install failure with modern transformers. (requirements.txt)
- Add Python 3.12 recommendation: `textractor` C extensions fail on 3.13+; added admonitions to the Requirements section and asset creation commands. (README.md)
- Add requirements.txt note: Documents that GPU deps (PyTorch, Transformers) are only needed for DeepSeek OCR. (README.md)
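The pin change amounts to a `requirements.txt` fragment like the following; only the constraints named in this PR are shown, and all other entries are omitted:

```
# Before: tokenizers==0.20.3 conflicted with transformers>=4.48.0,
# which requires tokenizers>=0.21, so resolution failed.
transformers>=4.48.0
tokenizers>=0.20.3
```

With both as floor constraints, pip is free to pick a `tokenizers` release that satisfies the `transformers` requirement.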
🧪 Verification
All changes were verified with 18 automated tests across two phases:

| Phase | Scope | Results |
|---|---|---|
| Phase 4: API Verification | Schema fields, CLI defaults, project structure, cross-references | ✅ 9 PASS, 0 FAIL, 1 WARN |
| Phase 5: CLI Commands | `load_dataset`, `snapshot_download`, `run.py --help`, Quick Start code, LFS clone | ✅ 8 PASS, 0 FAIL, 0 WARN |

Total: 17 PASS, 0 FAIL, 1 WARN → the WARN (V-08) confirms document counts vary by strategy, validating the qualification added in this PR.
Key commands tested end-to-end:
- `load_dataset("amazon/doc_split")` → all 3 splits ✅
- `load_dataset("amazon/doc_split", split="test")` → correct keys ✅
- Full Quick Start code block → runs without error ✅
- `python src/benchmarks/run.py --help` → all documented flags present ✅
- `python src/benchmarks/run.py --strategy poly_seq ...` → imports resolve, fails gracefully on missing assets ✅
- `python src/assets/run.py --help` → works with Python 3.12 + uv ✅
- `GIT_LFS_SKIP_SMUDGE=1 git clone` → .py files are real source ✅
Patterns & Observations
- 📋 Defaults were copy-paste errors: The 3 `--num-docs-*` defaults were off by 100x (8/5/2 vs 800/500/200), suggesting they were placeholder values that were never updated after the argparse code was finalized.
- 📝 Fictional example syndrome: The original "Benchmark Output Format" JSON was entirely fabricated; it didn't match any actual file in the repo. The replacement uses a verified schema from real dataset files.
- 🐍 Python 3.13 compatibility gap: `amazon-textract-textractor` → the `editdistance` C extension build fails on 3.13. This is an upstream issue, but it affects anyone following the README's install instructions on a modern Python.
- 📌 Overly strict pin: `tokenizers==0.20.3` was incompatible with the repo's own `transformers>=4.48.0` constraint → `pip install -r requirements.txt` was broken out of the box.