ISO-Bench: Can Coding Agents Optimize Real-World Inference Workloads?
Abstract
ISO-Bench evaluates coding agents on real-world LLM inference optimization tasks from popular serving frameworks, using a combination of execution-based and LLM-based metrics to assess performance.
We introduce ISO-Bench, a benchmark that tests coding agents on real-world inference optimization tasks drawn from vLLM and SGLang, two of the most popular LLM serving frameworks. Each task gives an agent a codebase and a bottleneck description; the agent must produce an optimization patch, which is evaluated against the expert human solution. We curated 54 tasks from merged pull requests with measurable performance improvements. Existing benchmarks rely heavily on runtime-based metrics, but such metrics can be gamed: a patch can pass tests without capturing the actual intent of the code change. We therefore combine hard (execution-based) and soft (LLM-based) metrics and show that both are necessary for a complete evaluation. Evaluating both closed- and open-source coding agents, we find that no single agent dominates across codebases. Surprisingly, agents often identify the correct bottleneck but fail to produce a working fix. We also show that agents built on identical underlying models differ substantially, suggesting that scaffolding matters as much as the model.
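The abstract does not spell out how the hard and soft metrics are combined, so here is a minimal Python sketch of one way a per-task evaluation could pair an execution-based speedup check with an LLM-judge intent score. All names, thresholds, and data below are hypothetical assumptions for illustration, not ISO-Bench's actual harness.

```python
# Hypothetical sketch: combine a hard (execution-based) metric with a soft
# (LLM-based) metric per task. All helpers, fields, and thresholds are
# illustrative assumptions, not the benchmark's real evaluation code.
from dataclasses import dataclass


@dataclass
class TaskResult:
    speedup: float          # measured throughput ratio: patched / baseline
    tests_passed: bool      # functional correctness of the patched codebase
    intent_score: float     # LLM-judge score in [0, 1] vs. the expert patch


def hard_metric(result: TaskResult, min_speedup: float = 1.05) -> bool:
    """Execution-based check: the patch must pass tests and actually speed things up."""
    return result.tests_passed and result.speedup >= min_speedup


def soft_metric(result: TaskResult, threshold: float = 0.5) -> bool:
    """LLM-based check: does the patch capture the intent of the expert solution?"""
    return result.intent_score >= threshold


def evaluate(results: list[TaskResult]) -> dict[str, float]:
    """Aggregate both metrics; count a task as resolved only if both checks pass."""
    n = len(results)
    hard = sum(hard_metric(r) for r in results) / n
    soft = sum(soft_metric(r) for r in results) / n
    both = sum(hard_metric(r) and soft_metric(r) for r in results) / n
    return {"hard_pass_rate": hard, "soft_pass_rate": soft, "resolved_rate": both}


if __name__ == "__main__":
    demo = [
        TaskResult(speedup=1.30, tests_passed=True, intent_score=0.9),
        TaskResult(speedup=1.00, tests_passed=True, intent_score=0.8),   # right idea, no gain
        TaskResult(speedup=1.20, tests_passed=False, intent_score=0.2),  # fast but broken
    ]
    print(evaluate(demo))
```

Requiring both checks to pass mirrors the paper's point: runtime-only metrics can be gamed, while intent-only scoring cannot tell whether the patch actually works.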
Community
We built ISO-Bench, a suite of 54 real optimization tasks from vLLM and SGLang, and found that agents often understand the problem but can't execute the fix.
The following papers were recommended by the Semantic Scholar API:
- ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development (2026)
- IDE-Bench: Evaluating Large Language Models as IDE Agents on Real-World Software Engineering Tasks (2026)
- SWE-AGI: Benchmarking Specification-Driven Software Construction with MoonBit in the Era of Autonomous Agents (2026)
- FeatureBench: Benchmarking Agentic Coding for Complex Feature Development (2026)
- DevOps-Gym: Benchmarking AI Agents in Software DevOps Cycle (2026)
- OmniCode: A Benchmark for Evaluating Software Engineering Agents (2026)
- ProjDevBench: Benchmarking AI Coding Agents on End-to-End Project Development (2026)