Modeling Copilots for Text-to-Model Translation
There is growing interest in leveraging large language models (LLMs) for text-to-model translation and optimization tasks. This paper advances this line of research by introducing Text2Model and Text2Zinc. Text2Model is a suite of copilots based on several LLM strategies of varying complexity, together with an online leaderboard. Text2Zinc is a cross-domain dataset capturing optimization and satisfaction problems specified in natural language, accompanied by an interactive editor with a built-in AI assistant. While there is an emerging literature on using LLMs to translate combinatorial problems into formal models, our work is the first attempt to integrate both satisfaction and optimization problems within a unified architecture and dataset. Moreover, our approach is solver-agnostic, unlike existing work that focuses on translation to a solver-specific model. To achieve this, we leverage MiniZinc's solver- and paradigm-agnostic modeling capabilities to formulate combinatorial problems. We conduct comprehensive experiments comparing execution and solution accuracy across several single- and multi-call strategies, including: zero-shot prompting, chain-of-thought reasoning, intermediate representations via knowledge graphs, grammar-based syntax encoding, and agentic approaches that decompose the modeling task into sequential sub-tasks. Our copilot strategies are competitive with, and in places improve upon, recent research in this domain. Our findings indicate that while LLMs are promising, they are not yet a push-button technology for combinatorial modeling. We release the Text2Model copilots and leaderboard, together with the Text2Zinc dataset and interactive editor, as open source to support closing this performance gap.
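To illustrate the target formalism, the following is a minimal MiniZinc sketch of a toy optimization problem (a hypothetical example for illustration only, not drawn from the Text2Zinc dataset):

```minizinc
% Toy production-planning model: pick integer quantities x and y
% subject to a resource limit, maximizing profit.
var 0..10: x;
var 0..10: y;
constraint 2*x + 3*y <= 12;   % shared resource budget
solve maximize 3*x + 2*y;     % objective; use "solve satisfy;" for satisfaction problems
output ["x=\(x), y=\(y)"];
```

Because MiniZinc models are declarative and backend-independent, the same model can be dispatched unchanged to different solvers (e.g., CP, MIP, or SAT-based backends), which is what makes the solver-agnostic evaluation described above possible.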
