Large language models (LLMs) have opened new paradigms in optimization modeling by enabling the generation of executable solver code from natural language descriptions. Despite this promise, existing approaches typically remain solver-driven: they rely on single-pass forward generation and apply limited post-hoc fixes based on solver error messages, leaving semantic errors undetected and silently producing syntactically correct but logically flawed models. To address this challenge, we propose SAC-Opt, a backward-guided correction framework that grounds optimization modeling in problem semantics rather than solver feedback. At each step, SAC-Opt aligns the original semantic anchors with those reconstructed from the generated code and selectively corrects only the mismatched components, driving convergence toward a semantically faithful model. This anchor-driven correction enables fine-grained refinement of constraint and objective logic, enhancing both fidelity and robustness without requiring additional training or supervision. Empirical results on seven public datasets demonstrate that SAC-Opt improves average modeling accuracy by 7.7%, with gains of up to 21.9% on the ComplexLP dataset. These findings highlight the importance of semantic-anchored correction in LLM-based optimization workflows to ensure faithful translation from problem intent to solver-executable code.
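The anchor-driven correction loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the representation of anchors as name-to-specification mappings and the helper names (`align_anchors`, `sac_opt_loop`, the `regenerate` and `reconstruct` callbacks) are all hypothetical.

```python
def align_anchors(original: dict, reconstructed: dict) -> set:
    """Return names of semantic anchors (constraint/objective components)
    whose reconstructed form disagrees with the original specification."""
    return {name for name, spec in original.items()
            if reconstructed.get(name) != spec}

def sac_opt_loop(original, code_components, regenerate, reconstruct,
                 max_iters=5):
    """Backward-guided correction sketch: reconstruct anchors from the
    generated code, compare against the originals, and regenerate only
    the mismatched components until all anchors align."""
    for _ in range(max_iters):
        reconstructed = reconstruct(code_components)
        mismatched = align_anchors(original, reconstructed)
        if not mismatched:
            # All anchors aligned: the model is semantically faithful.
            return code_components
        for name in mismatched:
            # Selectively correct only the flawed components.
            code_components[name] = regenerate(name, original[name])
    return code_components
```

The key design point mirrored here is selectivity: components whose reconstructed anchors already match are left untouched, so correction effort concentrates on the parts of the model that are actually wrong.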