# CurveBench

This dataset is the Hard set of CurveBench, a benchmark designed to evaluate the topological reasoning capabilities of large vision-language models (VLMs).
Each sample is a hand-drawn image of disjoint curves paired with the exact rooted-tree structure that encodes the nestedness relationships visible in the image.
Sister dataset: the foundational Easy category (fewer than 6 curves, exhaustive over all rooted trees with up to 6 nodes) lives in AmirMohseni/CurveBench-Easy.
## What is CurveBench?
To the best of our knowledge, CurveBench is the first dataset explicitly designed to benchmark the topological reasoning capabilities of VLMs by mapping visual containment to exact combinatorial structures. While existing datasets often evaluate semantic segmentation or geometric object detection, CurveBench isolates containment and separation as the core signals for visual reasoning.
A model is asked to infer a global topological structure — specifically, a rooted tree where:
- each node represents a contiguous bounded region, and
- each edge denotes the boundary curve that separates two adjacent regions (parent contains child).
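As a concrete (hypothetical) illustration of this encoding: suppose curve 1 contains curve 2, and curve 3 is drawn separately outside both. The edge list then determines each region's nesting depth, as a minimal sketch shows:

```python
# Hypothetical example: curve 1 contains curve 2; curve 3 is separate.
# Node 0 is the unbounded outer region (the root).
edges = [(0, 1), (1, 2), (0, 3)]

# child -> parent lookup derived from the (parent, child) edge list
parent = {child: p for p, child in edges}

def nesting_depth(node):
    """Number of boundary curves enclosing this region."""
    depth = 0
    while node != 0:
        node = parent[node]
        depth += 1
    return depth

print([nesting_depth(n) for n in [1, 2, 3]])  # -> [1, 2, 1]
```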
The full benchmark contains 756 rigorously hand-drawn images across five categories, split across this dataset and its sister dataset:
| Category | Count | Dataset |
|---|---|---|
| Easy | 300 | AmirMohseni/CurveBench-Easy |
| Polygon | 199 | this dataset |
| Topographical | 100 | this dataset |
| Maze | 100 | this dataset |
| Counting | 57 | this dataset |
## Categories (Hard set)

### Polygon (199 images)
Following a systematic construction methodology identical to the Easy category, this subset restricts the geometries entirely to non-intersecting polygons. This tests a model's robustness to sharp angles and piecewise-linear boundaries compared to smooth, continuous Jordan curves.
### Topographical (100 images)

Directly inspired by real-world topographical maps, these images mimic the natural behaviour of elevation level sets, bridging the gap between theoretical combinatorial benchmarks and practical visual-understanding domains.
### Maze (100 images)
Designed to stress-test long-range spatial reasoning, this category features highly convoluted, labyrinthine curves with deep nesting. The spatial entanglement makes distinguishing the interior from the exterior of a boundary visually demanding, forcing models to track complex geometric boundaries over long distances.
### Counting (57 images)

This densely populated subset evaluates a model's scalability and capacity limits. Focused primarily on the sheer volume of nested entities, these images are packed with many disjoint curves, challenging the model to construct correspondingly large rooted trees without accumulating structural or logical errors.
## Dataset structure

### Splits
| Split name | Size | Description |
|---|---|---|
| `polygon` | 199 | Piecewise-linear polygon boundaries |
| `topographical` | 100 | Topographic-map-inspired curves |
| `maze` | 100 | Labyrinthine, deeply nested curves |
| `counting` | 57 | High-density curve configurations |
| `combined` | 456 | All four categories merged |
### Fields
| Field | Type | Description |
|---|---|---|
| `image` | Image | The hand-drawn image of disjoint curves (PNG) |
| `category` | string | One of `"Counting"`, `"Maze"`, `"Polygon"`, `"Topographical"` |
| `filename` | string | Original filename of the image |
| `num_nodes` | int32 | Number of nodes in the rooted tree (including the implicit root = outer region) |
| `tree` | string | Stringified edge list, e.g. `"[(0, 1), (0, 2), (1, 3)]"`; each tuple is `(parent, child)`, with 0 as the outermost/root region |
### Example row

```python
{
  "image": <PIL.Image>,
  "category": "Counting",
  "filename": "1.PNG",
  "num_nodes": 26,
  "tree": "[(0, 1), (0, 6), (1, 2), (2, 3), (3, 4), (4, 5), (6, 7), ...]",
}
```
To parse the `tree` field back to a list of tuples:

```python
import ast

edges = ast.literal_eval(sample["tree"])  # list of (parent, child) int tuples
```
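From the parsed edge list it is straightforward to build a parent-to-children adjacency map for traversal, and to sanity-check `num_nodes` (a tree on n nodes has exactly n - 1 edges). A small sketch using an example `tree` value:

```python
import ast
from collections import defaultdict

tree_str = "[(0, 1), (0, 2), (1, 3)]"  # an example sample["tree"] value
edges = ast.literal_eval(tree_str)

# parent -> children adjacency map for tree traversal; node 0 is the root.
children = defaultdict(list)
for parent, child in edges:
    children[parent].append(child)

# Sanity check: a tree on n nodes has exactly n - 1 edges.
n = len({v for edge in edges for v in edge})
assert n == len(edges) + 1

print(dict(children))  # -> {0: [1, 2], 1: [3]}
```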
## Ground truth generation
Ground-truth trees were produced using an automated OpenCV contour-based extraction pipeline that traces the boundary curves in each image and assembles the parent–child containment relationships into a rooted tree. Every annotation was subsequently human-verified to ensure structural correctness.
The extraction scripts are publicly available at:
https://github.com/Amir-Mohseni/CurveBench
## Evaluation
Each predicted tree is compared to the ground truth using tree isomorphism (via NetworkX): a prediction receives full credit only if the predicted edge set is structurally identical to the ground-truth tree up to node relabelling. This provides a deterministic, binary evaluation metric that admits no partial credit for structurally incorrect trees.
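Since the metric reduces to rooted-tree isomorphism, the same check can be sketched without any dependencies via the classic AHU canonical encoding (an illustrative equivalent, not necessarily the benchmark's exact implementation; node 0 is assumed to be the root in both trees, matching the dataset's `tree` encoding):

```python
from collections import defaultdict

def canonical(edges, root=0):
    """AHU canonical encoding of a rooted tree given as (parent, child) edges.

    Two rooted trees are isomorphic up to node relabelling iff their
    canonical strings are equal.
    """
    children = defaultdict(list)
    for p, c in edges:
        children[p].append(c)

    def enc(v):
        # Sort child encodings so the string is label-independent.
        return "(" + "".join(sorted(enc(c) for c in children[v])) + ")"

    return enc(root)

# Relabelled versions of the same tree match...
assert canonical([(0, 1), (0, 2), (1, 3)]) == canonical([(0, 2), (0, 1), (2, 3)])
# ...while a structurally different tree does not.
assert canonical([(0, 1), (1, 2)]) != canonical([(0, 1), (0, 2)])
```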
## Usage

```python
from datasets import load_dataset

# Load a single category
ds = load_dataset("AmirMohseni/CurveBench", split="polygon")

# Load everything
ds_all = load_dataset("AmirMohseni/CurveBench", split="combined")

# Inspect a sample
sample = ds[0]
print(sample["category"])   # e.g. "Polygon"
print(sample["num_nodes"])  # e.g. 5
print(sample["tree"])       # e.g. "[(0, 1), (1, 2), ...]"
sample["image"].show()      # PIL Image
```
A ready-made evaluation environment (Prime Intellect Verifiers) is available at amirmohseni/curvebench-env.
## Citation
TO BE ADDED