Large Language Models (LLMs) have recently emerged as effective surrogate models and candidate generators within global optimization frameworks for expensive black-box functions. Despite promising results, LLM-based methods often struggle in high-dimensional search spaces or when domain-specific priors are lacking, leading to sparse or uninformative suggestions. To overcome these limitations, we propose HOLLM, a novel global optimization algorithm that enhances LLM-driven sampling by partitioning the search space into promising subregions. Each subregion acts as a ``meta-arm'' selected via a bandit-inspired scoring mechanism that effectively balances exploration and exploitation. Within each selected subregion, an LLM then proposes high-quality candidate points without requiring explicit domain knowledge. Empirical evaluation on standard optimization benchmarks shows that HOLLM consistently matches or surpasses leading global optimization methods, while substantially outperforming global LLM-based sampling strategies.
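The abstract does not specify the exact scoring rule or partitioning scheme, so the minimal sketch below is only a rough illustration of the meta-arm idea: it assumes a UCB-style score over subregions, and the names `llm_propose` and `objective` are hypothetical callables standing in for the LLM candidate generator and the black-box function.

```python
import math

# Hypothetical sketch of an outer loop in the spirit of HOLLM's meta-arm
# selection. The UCB-style score, the region representation, and the helper
# names are assumptions, not the paper's actual algorithm.

def ucb_score(region, total_pulls, c=1.0):
    """Bandit-inspired score: exploit a region's best observed value,
    explore regions that have been visited rarely."""
    if region["pulls"] == 0:
        return float("inf")  # ensure every subregion is visited at least once
    exploit = region["best_value"]
    explore = c * math.sqrt(math.log(total_pulls) / region["pulls"])
    return exploit + explore

def hollm_step(regions, llm_propose, objective, total_pulls):
    """One iteration: pick the highest-scoring subregion, ask the LLM for a
    candidate inside its bounds, evaluate it, and update region statistics."""
    region = max(regions, key=lambda r: ucb_score(r, total_pulls))
    x = llm_propose(region["bounds"])  # LLM generates a point in this subregion
    y = objective(x)                   # expensive black-box evaluation
    region["pulls"] += 1
    region["best_value"] = max(region["best_value"], y)
    return x, y
```

Under this reading, the bandit score governs *where* to sample next, while the LLM only has to generate good points *within* a narrow region, which is one plausible explanation for why the abstract contrasts HOLLM with purely global LLM-based sampling.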