Despite their widespread adoption across domains, owing largely to their strong reasoning capabilities, Large Language Models (LLMs) are not yet an off-the-shelf choice for driving multi-objective optimization. Conventional strategies rank highly on benchmarks thanks to their native handling of numerical inputs and careful modelling choices that balance exploration against Pareto-front exploitation and accommodate multiple (conflicting) objectives. In this paper, we close this gap by leveraging LLMs as surrogate models and candidate samplers inside a structured hierarchical search strategy. By adaptively partitioning the input space into disjoint hyperrectangular regions and ranking them with a composite score function, we restrict the LLM's generative process to specific, high-potential sub-spaces; this makes the problem easier to solve, since the LLM needs to reason only about local structure rather than the global structure of the problem. We show that, under standard regularity assumptions, our algorithm generates candidate solutions that converge to the true Pareto set in Hausdorff distance. Empirically, it consistently outperforms the global LLM-based multi-objective optimizer and is on par with standard evolutionary and Bayesian optimization algorithms on synthetic and real-world benchmarks.
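To make the mechanics concrete, below is a minimal, self-contained Python sketch of the kind of hierarchical, region-restricted search described above. It is an illustration under stated assumptions, not the paper's implementation: the composite score (a nondominated-fraction term plus a volume-based exploration bonus), the region-splitting rule, and the `llm_propose` stub standing in for the LLM sampler are all hypothetical placeholders.

```python
# Minimal sketch of LLM-guided hierarchical multi-objective search.
# The composite score and `llm_propose` are illustrative stand-ins,
# not the paper's exact formulas.
import random
from dataclasses import dataclass, field

@dataclass
class Region:
    lo: list                                   # lower bounds of the hyperrectangle
    hi: list                                   # upper bounds
    points: list = field(default_factory=list) # (x, f(x)) pairs observed inside

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def volume(r):
    v = 1.0
    for l, h in zip(r.lo, r.hi):
        v *= h - l
    return v

def score(r, archive, explore_weight=0.5):
    """Hypothetical composite score: exploitation (fraction of the region's
    points that are globally nondominated) plus a volume-based exploration
    bonus favouring large, under-sampled regions."""
    nondom = sum(1 for _, f in r.points
                 if not any(dominates(g, f) for _, g in archive))
    exploit = nondom / max(len(r.points), 1)
    explore = volume(r) / (1 + len(r.points))
    return exploit + explore_weight * explore

def llm_propose(r, k=4):
    """Stand-in for the LLM sampler: in the actual method the LLM is prompted
    with the region's bounds and local data; here we sample uniformly inside
    the hyperrectangle to keep the sketch self-contained."""
    return [[random.uniform(l, h) for l, h in zip(r.lo, r.hi)] for _ in range(k)]

def split(r):
    """Bisect a region along its longest side into two disjoint children."""
    d = max(range(len(r.lo)), key=lambda i: r.hi[i] - r.lo[i])
    mid = (r.lo[d] + r.hi[d]) / 2
    left = Region(r.lo[:], r.hi[:d] + [mid] + r.hi[d + 1:])
    right = Region(r.lo[:d] + [mid] + r.lo[d + 1:], r.hi[:])
    for x, f in r.points:
        (left if x[d] <= mid else right).points.append((x, f))
    return [left, right]

def optimize(f, lo, hi, budget=200):
    regions = [Region(list(lo), list(hi))]
    archive = []  # every evaluated (x, f(x)) pair
    while budget > 0:
        best = max(regions, key=lambda r: score(r, archive))
        for x in llm_propose(best):      # sample only inside the top region
            fx = f(x)
            best.points.append((x, fx))
            archive.append((x, fx))
            budget -= 1
        if len(best.points) > 8:         # refine a well-sampled region
            regions.remove(best)
            regions.extend(split(best))
    # return the nondominated subset of the archive (empirical Pareto set)
    return [(x, fx) for x, fx in archive
            if not any(dominates(g, fx) for _, g in archive)]

# Example: a toy 2-objective problem on [0, 1]^2 (minimize both objectives).
pareto = optimize(lambda x: (x[0], 1 - x[0] + x[1]), [0, 0], [1, 1])
```

The key design choice this sketch mirrors is that candidate generation is always confined to one ranked hyperrectangle at a time, so the sampler reasons over local bounds and local data rather than the full search space.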