PaveBench: A Versatile Benchmark for Pavement Distress Perception and Interactive Vision-Language Analysis
Abstract
PaveBench is a large-scale benchmark for pavement distress perception and interactive vision-language analysis on real-world highway inspection images. It supports four core tasks: classification, object detection, semantic segmentation, and vision-language question answering. On the visual side, PaveBench provides large-scale annotations on real top-down pavement images and includes a curated hard-distractor subset for robustness evaluation. On the multimodal side, it introduces PaveVQA, a real-image question answering dataset supporting single-turn, multi-turn, and expert-corrected interactions, covering recognition, localization, quantitative estimation, and maintenance reasoning.
About the Dataset
PaveBench is built on real-world highway inspection images collected in Liaoning Province, China, using a highway inspection vehicle equipped with a high-resolution line-scan camera. The captured images are top-down orthographic pavement views, which preserve the geometric properties of distress patterns and support reliable downstream quantification. The dataset provides unified annotations for multiple pavement distress tasks and is designed to connect visual perception with interactive vision-language analysis.
The visual subset, Multi-Task Visual Perception, contains 20,124 high-resolution pavement images of size 512 × 512. It supports:
- image classification
- object detection
- semantic segmentation
In addition, the multimodal subset, PaveVQA, contains 32,160 question-answer pairs, including:
- 10,050 single-turn queries
- 20,100 multi-turn interactions
- 2,010 error-correction pairs
These question-answer pairs cover recognition, localization, quantitative estimation, severity assessment, and maintenance recommendation.
The overall dataset statistics are summarized in the figure below.
Multi-Task Visual Perception
For the classification task, PaveBench includes six visual categories:
- Longitudinal Crack
- Transverse Crack
- Alligator Crack
- Patch
- Pothole
- Negative Sample
All images are annotated through a hierarchical multi-task pipeline, where image-level labels, instance-level bounding boxes, and pixel-level masks are constructed to support consistent evaluation across different perception settings.
A key feature of PaveBench is its curated hard-distractor subset. During annotation, the dataset explicitly retains visually confusing real-world patterns such as:
- pavement stains
- tree shadows
- road markings
- ...
These distractors often co-occur with real pavement distress and closely resemble true distress patterns, making the benchmark more realistic and more challenging for robustness evaluation.
PaveVQA
PaveVQA is a real-image visual question answering benchmark built on top of PaveBench. It supports:
- single-turn QA
- multi-turn dialogue
- expert-corrected interactions
The questions are designed around practical pavement inspection needs, including:
- presence verification
- distress classification
- localization
- quantitative analysis
- severity assessment
- maintenance recommendation
- ...
Structured metadata derived from visual annotations, such as bounding boxes, pixel area, and skeleton length, is used to support grounded and low-hallucination question answering.
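As a minimal sketch of how such grounding metadata could be derived, the snippet below computes a bounding box and pixel area from a binary distress mask using only NumPy. The function name `mask_metadata` is hypothetical; skeleton length would additionally require a thinning step (e.g. a skeletonization routine), which is omitted here.

```python
import numpy as np

def mask_metadata(mask: np.ndarray) -> dict:
    """Derive simple grounding metadata from a binary distress mask.

    Returns an xyxy bounding box and the pixel area. Skeleton length
    would need an extra thinning step (e.g. skimage.morphology.skeletonize),
    not shown in this sketch.
    """
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        # No distress pixels: no box, zero area
        return {"bbox_xyxy": None, "pixel_area": 0}
    ys, xs = np.nonzero(mask)
    return {
        "bbox_xyxy": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
        "pixel_area": int(mask.sum()),
    }
```

Feeding such structured values into the prompt, rather than asking the model to estimate them from pixels, is what keeps the quantitative answers low-hallucination.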
Benchmark and Experiments
- Visual Perception Evaluation. PaveBench supports classification, detection, and segmentation under a unified benchmark and remains challenging in realistic scenes with hard distractors. For detection and segmentation, Longitudinal Crack and Transverse Crack are merged into Linear Crack, because their distinction mainly lies in global direction, whereas these two tasks focus on accurately localizing crack instances and extracting crack regions.
- Multimodal VQA Evaluation. LoRA fine-tuning significantly improves VLM performance on pavement-specific question answering.
- Agent-Augmented VQA Framework. The agent-augmented framework improves quantitative reliability by grounding VLM responses with specialized visual tools.
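The category merge for detection and segmentation can be sketched as a simple label remap. The mapping below uses the classification folder names from the dataset tree; the merged identifier `linear_crack` is an assumption for illustration.

```python
# Map the six classification categories to detection/segmentation categories.
# "linear_crack" merges longitudinal and transverse cracks, whose distinction
# is only global orientation; the merged name is a hypothetical identifier.
CLS_TO_DET = {
    "longitudinal_crack": "linear_crack",
    "transverse_crack": "linear_crack",
    "alligator_crack": "alligator_crack",
    "patch": "patch",
    "pothole": "pothole",
    # "negative" images contain no distress instance, hence no detection label
}

def to_detection_label(cls_label: str):
    """Return the merged detection/segmentation category, or None for negatives."""
    return CLS_TO_DET.get(cls_label)
```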
Dataset Tree
data/
├── Distress_Classification/
│   ├── train/
│   │   ├── alligator_crack/
│   │   ├── longitudinal_crack/
│   │   ├── negative/
│   │   ├── patch/
│   │   ├── pothole/
│   │   └── transverse_crack/
│   ├── val/ (same as train)
│   └── test/ (same as train)
│
├── Distress_Detection/
│   ├── annotations/
│   │   ├── instances_train.json
│   │   ├── instances_val.json
│   │   └── instances_test.json
│   └── images/
│       ├── train/
│       ├── val/
│       └── test/
│
├── Distress_PaveVQA/
│   ├── images/
│   ├── single_turn.jsonl
│   ├── multi_turn.jsonl
│   └── correction.jsonl
│
└── Distress_Segmentation/
    ├── images/
    │   ├── train/
    │   ├── val/
    │   └── test/
    ├── masks/
    │   ├── train/
    │   ├── val/
    │   └── test/
    ├── masks_vis/ (same as masks)
    ├── color_map.txt
    └── label_map.txt
Citation
If you use this dataset in your work, please cite it as:
@article{li2026pavebench,
  title={PaveBench: A Versatile Benchmark for Pavement Distress Perception and Interactive Vision-Language Analysis},
  author={Li, Dexiang and Che, Zhenning and Zhang, Haijun and Zhou, Dongliang and Zhang, Zhao and Han, Yahong},
  journal={arXiv preprint arXiv:2604.02804},
  year={2026},
  url={https://arxiv.org/abs/2604.02804}
}
License: cc-by-nc-sa-4.0