Large Language Models (LLMs) have been widely adopted for automated algorithm design, demonstrating strong capabilities in generating and evolving algorithms across diverse domains. Existing work has largely focused on examining their effectiveness on specific problems, with search strategies primarily guided by adaptive prompt design. In this paper, by investigating the token-wise attribution of prompts to LLM-generated algorithmic code, we show that providing high-quality algorithmic code examples can substantially improve the performance of LLM-driven optimization. Building upon this insight, we propose leveraging prior benchmark algorithms to guide LLM-driven optimization and demonstrate superior performance on two black-box optimization benchmarks: the pseudo-Boolean optimization suite (PBO) and the black-box optimization benchmarking suite (BBOB). Our findings highlight the value of integrating benchmarking studies to enhance both the efficiency and robustness of LLM-driven black-box optimization methods.
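To make the evaluation setting concrete, the sketch below shows how a candidate optimizer could be scored on one problem from each of the two suites named above, assuming the IOHprofiler `ioh` Python package. The random-search baseline, budget, and problem choices are illustrative stand-ins for an LLM-generated algorithm, not the paper's method.

```python
# Minimal sketch (assumptions: the `ioh` package from IOHprofiler, and
# random search as a placeholder for an LLM-generated optimizer).
import random
from ioh import get_problem, ProblemClass

def random_search(problem, sample, budget=200):
    """Placeholder optimizer: evaluates `budget` random candidates.
    ioh tracks the best-so-far value for the problem's goal direction
    (PBO problems maximize, BBOB problems minimize)."""
    for _ in range(budget):
        problem(sample(problem))
    return problem.state.current_best.y

def sample_bits(problem):
    # PBO problems are pseudo-Boolean: candidates are bit strings.
    return [random.randint(0, 1) for _ in range(problem.meta_data.n_variables)]

def sample_box(problem):
    # BBOB problems are continuous: sample uniformly within the box bounds.
    return [random.uniform(l, u) for l, u in zip(problem.bounds.lb, problem.bounds.ub)]

# One illustrative problem from each suite.
pbo = get_problem("OneMax", instance=1, dimension=16, problem_class=ProblemClass.PBO)
bbob = get_problem("Sphere", instance=1, dimension=5, problem_class=ProblemClass.BBOB)

print("PBO  best-so-far:", random_search(pbo, sample_bits))
print("BBOB best-so-far:", random_search(bbob, sample_box))
```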