fix: Revise README for accuracy, usability, and dependency compatibility

#2
by vawsgit - opened
Files changed (1)
  1. README.md +158 -53
README.md CHANGED
@@ -18,6 +18,32 @@ size_categories:
 
 **In addition to the dataset, we release this repository containing the complete toolkit for generating the benchmark datasets, along with Jupyter notebooks for data analysis.**
 
 # DocSplit: Document Packet Splitting Benchmark Generator
 
 A toolkit for creating benchmark datasets to test document packet splitting systems. Document packet splitting is the task of separating concatenated multi-page documents into individual documents with correct page ordering.
@@ -30,6 +56,26 @@ This toolkit generates five benchmark datasets of varying complexity to test how
 2. **Classify document types** accurately
 3. **Reconstruct correct page ordering** within each document
 
 ## Document Source
 
 We use the documents from **RVL-CDIP-N-MP**:
@@ -128,8 +174,12 @@ pip install -r requirements.txt
 
 Convert raw PDFs into structured assets with page images (300 DPI PNG) and OCR text (Markdown).
 
 #### Option A: AWS Textract OCR (Default)
 
 Best for English documents. Processes all document categories with Textract.
 
 ```bash
@@ -236,48 +286,76 @@ Explore the toolkit with Jupyter notebooks:
 2. **`notebooks/02_create_benchmarks.ipynb`** - Generate benchmarks with different strategies
 3. **`notebooks/03_analyze_benchmarks.ipynb`** - Analyze and visualize benchmark statistics
 
- ## Benchmark Output Format
 
- Each benchmark JSON contains:
 
 ```json
 {
-   "benchmark_name": "poly_seq",
-   "strategy": "PolySeq",
-   "split": "train",
-   "created_at": "2026-01-30T12:00:00",
-   "documents": [
     {
-       "spliced_doc_id": "splice_0001",
-       "source_documents": [
-         {"doc_type": "invoice", "doc_name": "doc1", "pages": [1,2,3]},
-         {"doc_type": "letter", "doc_name": "doc2", "pages": [1,2]}
-       ],
-       "ground_truth": [
-         {"page_num": 1, "doc_type": "invoice", "source_doc": "doc1", "source_page": 1},
-         {"page_num": 2, "doc_type": "invoice", "source_doc": "doc1", "source_page": 2},
-         ...
-       ],
-       "total_pages": 5
     }
-   ],
-   "statistics": {
-     "total_spliced_documents": 1000,
-     "total_pages": 7500,
-     "unique_doc_types": 16
-   }
 }
 ```
 
 ## Requirements
 
- - Python 3.8+
 - AWS credentials (for Textract OCR)
- - Dependencies: `boto3`, `loguru`, `pymupdf`, `pillow`
 
 ---
 
- ### Generate Benchmark Datasets
 
 ```bash
 # 1. Download and extract RVL-CDIP-N-MP source data from HuggingFace (1.25 GB)
@@ -292,7 +370,7 @@ cd ../..
 
 # 2. Create assets from raw PDFs
 # Extracts each page as PNG image and runs OCR to get text
- # These assets are then used in step 4 to create benchmark datasets
 # Output: Structured assets in data/assets/ with images and text per page
 python src/assets/run.py --raw-data-path data/raw_data --output-path data/assets
 
@@ -353,46 +431,73 @@ The toolkit generates five benchmarks of increasing complexity, based on the Doc
 - **Challenge**: Worst-case scenario with no structural assumptions
 - **Use Case**: Document management system failures or emergency recovery
 
 ## Project Structure
 
 ```
 doc-split-benchmark/
 ├── README.md
- ├── requirements.txt # All dependencies
 ├── src/
- │ ├── assets/ # Asset creation from PDFs
- │ │ ├── run.py # Main script
- │ │ ├── models.py # Document models
 │ │ └── services/
 │ │ ├── pdf_loader.py
- │ │ ├── textract_ocr.py
- │ │ └── asset_writer.py
 │ │
- │ └── benchmarks/ # Benchmark generation
- │ ├── run.py # Main script
- │ ├── models.py # Benchmark models
 │ └── services/
 │ ├── asset_loader.py
 │ ├── split_manager.py
 │ ├── benchmark_generator.py
 │ ├── benchmark_writer.py
- │ └── strategies/
- │ ├── mono_seq.py # DocSplit-Mono-Seq
- │ ├── mono_rand.py # DocSplit-Mono-Rand
- │ ├── poly_seq.py # DocSplit-Poly-Seq
- │ ├── poly_int.py # DocSplit-Poly-Int
- │ └── poly_rand.py # DocSplit-Poly-Rand
 
- ├── notebooks/ # Interactive examples
 │ ├── 01_create_assets.ipynb
 │ ├── 02_create_benchmarks.ipynb
 │ └── 03_analyze_benchmarks.ipynb
 
- └── data/ # Generated data (not in repo)
- ├── raw_data/ # Downloaded PDFs
- ├── assets/ # Extracted images + OCR
- └── benchmarks/ # Generated benchmarks
 ```
 
 ### Generate Benchmarks [Detailed]
@@ -405,8 +510,8 @@ python src/benchmarks/run.py \
 --assets-path data/assets \
 --output-path data/benchmarks \
 --num-docs-train 800 \
- --num-docs-test 200 \
- --num-docs-val 500 \
 --size small \
 --random-seed 42
 ```
@@ -415,9 +520,9 @@ python src/benchmarks/run.py \
 - `--strategy`: Benchmark strategy - `mono_seq`, `mono_rand`, `poly_seq`, `poly_int`, `poly_rand`, or `all` (default: all)
 - `--assets-path`: Directory containing assets from Step 1 (default: data/assets)
 - `--output-path`: Where to save benchmarks (default: data/benchmarks)
- - `--num-docs-train`: Number of spliced documents for training (default: 8)
- - `--num-docs-test`: Number of spliced documents for testing (default: 5)
- - `--num-docs-val`: Number of spliced documents for validation (default: 2)
 - `--size`: Benchmark size - `small` (5-20 pages) or `large` (20-500 pages) (default: small)
 - `--split-mapping`: Path to split mapping JSON (default: data/metadata/split_mapping.json)
 - `--random-seed`: Seed for reproducibility (default: 42)
 
 
 **In addition to the dataset, we release this repository containing the complete toolkit for generating the benchmark datasets, along with Jupyter notebooks for data analysis.**
 
+ ## Quick Start: Load the Dataset
+
+ ```python
+ from datasets import load_dataset
+
+ # Load all splits
+ ds = load_dataset("amazon/doc_split")
+
+ # Or load a single split
+ test = load_dataset("amazon/doc_split", split="test")
+ ```
+
+ Each row represents a spliced document packet:
+
+ ```python
+ doc = ds["train"][0]
+ print(doc["doc_id"])             # UUID for this packet
+ print(doc["total_pages"])        # Total pages in the packet
+ print(len(doc["subdocuments"]))  # Number of constituent documents
+
+ for sub in doc["subdocuments"]:
+     print(f"  {sub['doc_type_id']}: {len(sub['page_ordinals'])} pages")
+ ```
+
+ > **Note:** The `image_path` and `text_path` fields in each page reference assets that are not included in the dataset download. See [Data Formats](#data-formats) for details.
+
 # DocSplit: Document Packet Splitting Benchmark Generator
 
 A toolkit for creating benchmark datasets to test document packet splitting systems. Document packet splitting is the task of separating concatenated multi-page documents into individual documents with correct page ordering.
 
 2. **Classify document types** accurately
 3. **Reconstruct correct page ordering** within each document
 
+ ## Dataset Schema
+
+ When loaded via `load_dataset()`, each row contains:
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `doc_id` | string | UUID identifying the spliced packet |
+ | `total_pages` | int | Total number of pages in the packet |
+ | `subdocuments` | list | Array of constituent documents |
+
+ Each subdocument contains:
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `doc_type_id` | string | Document type category |
+ | `local_doc_id` | string | Identifier within the packet |
+ | `group_id` | string | Group identifier |
+ | `page_ordinals` | list[int] | Page positions within the packet |
+ | `pages` | list | Per-page metadata (image_path, text_path, original_doc_name) |
+
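This schema is straightforward to consume. As a sketch, a ground-truth lookup from page position to source document can be built with a dict comprehension; the `row` literal below is an invented sample following the field tables (real rows come from `load_dataset`):

```python
# Sketch: build a page -> (doc_type_id, local_doc_id) lookup from one row.
# The `row` dict is a hand-made example following the schema tables above;
# real rows come from load_dataset("amazon/doc_split").
row = {
    "doc_id": "example-uuid",
    "total_pages": 3,
    "subdocuments": [
        {"doc_type_id": "invoice", "local_doc_id": "d0", "page_ordinals": [1, 2]},
        {"doc_type_id": "letter", "local_doc_id": "d1", "page_ordinals": [3]},
    ],
}

page_labels = {
    ordinal: (sub["doc_type_id"], sub["local_doc_id"])
    for sub in row["subdocuments"]
    for ordinal in sub["page_ordinals"]
}
# Every page position should be labeled exactly once.
assert len(page_labels) == row["total_pages"]
print(page_labels[1])  # ('invoice', 'd0')
```

A mapping like this is a convenient ground truth to score a splitter's per-page predictions against.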
 ## Document Source
 
 We use the documents from **RVL-CDIP-N-MP**:
 
 
 Convert raw PDFs into structured assets with page images (300 DPI PNG) and OCR text (Markdown).
 
+ > **Note:** The code defaults for `--raw-data-path` (`../raw_data`) and `--output-path` (`../processed_assets`) assume running from within `src/assets/`. When running from the repo root, pass explicit paths as shown below.
+
 #### Option A: AWS Textract OCR (Default)
 
+ > **⚠️ Requires Python 3.12:** This command uses `amazon-textract-textractor`, which has C extension dependencies that may not build on Python 3.13+. See [Requirements](#requirements).
+
 Best for English documents. Processes all document categories with Textract.
 
 ```bash
 
 2. **`notebooks/02_create_benchmarks.ipynb`** - Generate benchmarks with different strategies
 3. **`notebooks/03_analyze_benchmarks.ipynb`** - Analyze and visualize benchmark statistics
 
+ ## Data Formats
+
+ The dataset provides two complementary formats for each benchmark:
+
+ ### Ground Truth JSON (used by `load_dataset`)
+
+ One JSON file per document packet in `datasets/{strategy}/{size}/ground_truth_json/{split}/`:
 
 ```json
 {
+   "doc_id": "...",
+   "total_pages": ...,
+   "subdocuments": [
     {
+       "doc_type_id": "...",
+       "local_doc_id": "...",
+       "group_id": "...",
+       "page_ordinals": [...],
+       "pages": [
+         {
+           "page": 1,
+           "original_doc_name": "...",
+           "image_path": "rvl-cdip-nmp-assets/...",
+           "text_path": "rvl-cdip-nmp-assets/...",
+           "local_doc_id_page_ordinal": ...
+         }
+       ]
     }
+   ]
 }
 ```
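A ground-truth file of this shape can be consumed with the standard library alone. The sketch below parses a minimal hand-written instance of the structure above (the field values are placeholders, not shipped data) and sanity-checks that the page ordinals cover the packet:

```python
import json

# Sketch: parse one ground-truth JSON document. The literal below is a
# minimal hand-made instance of the documented structure, not shipped data.
raw = """
{
  "doc_id": "abc",
  "total_pages": 2,
  "subdocuments": [
    {"doc_type_id": "memo", "local_doc_id": "d0", "group_id": "g0",
     "page_ordinals": [1, 2],
     "pages": [
       {"page": 1, "original_doc_name": "m", "image_path": "rvl-cdip-nmp-assets/x.png",
        "text_path": "rvl-cdip-nmp-assets/x.md", "local_doc_id_page_ordinal": 1},
       {"page": 2, "original_doc_name": "m", "image_path": "rvl-cdip-nmp-assets/y.png",
        "text_path": "rvl-cdip-nmp-assets/y.md", "local_doc_id_page_ordinal": 2}
     ]}
  ]
}
"""
doc = json.loads(raw)

# Sanity check: subdocument page ordinals should account for every page.
n_pages = sum(len(sub["page_ordinals"]) for sub in doc["subdocuments"])
assert n_pages == doc["total_pages"]
```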
 
+ ### CSV (flat row-per-page format)
+
+ One CSV per split in `datasets/{strategy}/{size}/`:
+
+ | Column | Description |
+ |--------|-------------|
+ | `doc_type` | Document type category |
+ | `original_doc_name` | Source document filename |
+ | `parent_doc_name` | UUID of the spliced packet (matches `doc_id` in JSON) |
+ | `local_doc_id` | Local identifier within the packet |
+ | `page` | Page number within the packet |
+ | `image_path` | Path to page image (prefix: `data/assets/`) |
+ | `text_path` | Path to OCR text (prefix: `data/assets/`) |
+ | `group_id` | Group identifier |
+ | `local_doc_id_page_ordinal` | Page ordinal within the original source document |
+
+ ### Asset Paths
+
+ The image and text paths in both formats reference assets that are **not included** in this repository:
+
+ - JSON paths use prefix `rvl-cdip-nmp-assets/`
+ - CSV paths use prefix `data/assets/`
+
+ To resolve these paths, run the asset creation pipeline (see [Create Assets](#step-1-create-assets)). The data can be used for metadata and label analysis without the actual images.
+
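Since the CSV is one row per page, packets can be reconstructed by grouping on `parent_doc_name`. A sketch with the standard library; the three sample rows are invented, following the column table above:

```python
import csv
import io
from collections import defaultdict

# Sketch: group the flat per-page CSV rows back into packets.
# Column names follow the table above; the rows are a made-up sample.
csv_text = """doc_type,original_doc_name,parent_doc_name,local_doc_id,page,image_path,text_path,group_id,local_doc_id_page_ordinal
invoice,a.pdf,p1,d0,1,data/assets/a1.png,data/assets/a1.md,g0,1
invoice,a.pdf,p1,d0,2,data/assets/a2.png,data/assets/a2.md,g0,2
letter,b.pdf,p2,d0,1,data/assets/b1.png,data/assets/b1.md,g0,1
"""

packets = defaultdict(list)
for row in csv.DictReader(io.StringIO(csv_text)):
    packets[row["parent_doc_name"]].append(row)

# Within a packet, order pages by the integer `page` column.
for rows in packets.values():
    rows.sort(key=lambda r: int(r["page"]))

print({k: len(v) for k, v in packets.items()})  # {'p1': 2, 'p2': 1}
```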
 ## Requirements
 
+ - Python 3.12 recommended (see note below)
 - AWS credentials (for Textract OCR)
+ - Dependencies: `pip install -r requirements.txt`
+
+ > **⚠️ Python Version:** The `amazon-textract-textractor` package (required by `src/assets/run.py`) depends on C extensions (`editdistance`) that may fail to build on Python 3.13+. **Python 3.12 is recommended.** Using [uv](https://docs.astral.sh/uv/) as your package installer can also help resolve build issues.
+
+ > **Note:** `requirements.txt` currently includes GPU dependencies (PyTorch, Transformers) that are only needed for DeepSeek OCR on multilingual documents. If you only need Textract OCR or want to explore the pre-generated data, the core dependencies are: `boto3`, `loguru`, `pymupdf`, `pillow`, `pydantic`, `amazon-textract-textractor`, `tenacity`.
 
 ---
 
+ ### Download Source Data and Generate Benchmarks
 
 ```bash
 # 1. Download and extract RVL-CDIP-N-MP source data from HuggingFace (1.25 GB)
 
 
 # 2. Create assets from raw PDFs
 # Extracts each page as PNG image and runs OCR to get text
+ # These assets are then used in step 3 to create benchmark datasets
 # Output: Structured assets in data/assets/ with images and text per page
 python src/assets/run.py --raw-data-path data/raw_data --output-path data/assets
 
 - **Challenge**: Worst-case scenario with no structural assumptions
 - **Use Case**: Document management system failures or emergency recovery
 
+ ### Dataset Statistics
+
+ The pre-generated benchmarks include train, test, and validation splits in both `small` (5–20 pages per packet) and `large` (20–500 pages per packet) sizes. For `mono_rand/large`:
+
+ | Split | Document Count |
+ |-------|---------------|
+ | Train | 417 |
+ | Test | 96 |
+ | Validation | 51 |
 
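For reference, those counts correspond to roughly a 74/17/9 percent split, which a quick arithmetic check confirms:

```python
# Quick check of the mono_rand/large split proportions quoted above.
counts = {"train": 417, "test": 96, "validation": 51}
total = sum(counts.values())
shares = {name: round(n / total, 2) for name, n in counts.items()}
print(total, shares)  # 564 {'train': 0.74, 'test': 0.17, 'validation': 0.09}
```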
 ## Project Structure
 
 ```
 doc-split-benchmark/
 ├── README.md
+ ├── requirements.txt
 ├── src/
+ │ ├── assets/ # Asset creation from PDFs
+ │ │ ├── __init__.py
+ │ │ ├── models.py
+ │ │ ├── run.py # Main entry point
 │ │ └── services/
+ │ │ ├── __init__.py
+ │ │ ├── asset_creator.py
+ │ │ ├── asset_writer.py
+ │ │ ├── deepseek_ocr.py
 │ │ ├── pdf_loader.py
+ │ │ └── textract_ocr.py
 │ │
+ │ └── benchmarks/ # Benchmark generation
+ │ ├── __init__.py
+ │ ├── models.py
+ │ ├── run.py # Main entry point
 │ └── services/
+ │ ├── __init__.py
 │ ├── asset_loader.py
 │ ├── split_manager.py
 │ ├── benchmark_generator.py
 │ ├── benchmark_writer.py
+ │ └── shuffle_strategies/
+ │ ├── __init__.py
+ │ ├── base_strategy.py
+ │ ├── mono_seq.py
+ │ ├── mono_rand.py
+ │ ├── poly_seq.py
+ │ ├── poly_int.py
+ │ └── poly_rand.py
 
+ ├── notebooks/
 │ ├── 01_create_assets.ipynb
 │ ├── 02_create_benchmarks.ipynb
 │ └── 03_analyze_benchmarks.ipynb
 
+ ├── datasets/ # Pre-generated benchmark data
+ │ └── {strategy}/{size}/
+ │ ├── train.csv
+ │ ├── test.csv
+ │ ├── validation.csv
+ │ └── ground_truth_json/
+ │ ├── train/*.json
+ │ ├── test/*.json
+ │ └── validation/*.json
+
+ └── data/ # Generated by toolkit (not in repo)
+ ├── raw_data/
+ ├── assets/
+ └── benchmarks/
 ```
 
 ### Generate Benchmarks [Detailed]
 
 --assets-path data/assets \
 --output-path data/benchmarks \
 --num-docs-train 800 \
+ --num-docs-test 500 \
+ --num-docs-val 200 \
 --size small \
 --random-seed 42
 ```
 
 - `--strategy`: Benchmark strategy - `mono_seq`, `mono_rand`, `poly_seq`, `poly_int`, `poly_rand`, or `all` (default: all)
 - `--assets-path`: Directory containing assets from Step 1 (default: data/assets)
 - `--output-path`: Where to save benchmarks (default: data/benchmarks)
+ - `--num-docs-train`: Number of spliced documents for training (default: 800)
+ - `--num-docs-test`: Number of spliced documents for testing (default: 500)
+ - `--num-docs-val`: Number of spliced documents for validation (default: 200)
 - `--size`: Benchmark size - `small` (5-20 pages) or `large` (20-500 pages) (default: small)
 - `--split-mapping`: Path to split mapping JSON (default: data/metadata/split_mapping.json)
 - `--random-seed`: Seed for reproducibility (default: 42)