Multi-objective optimization problems arise frequently in practical applications, and multi-objective evolutionary algorithms (MOEAs) are regarded as one of the effective approaches to solving them. However, their inherent randomness can prevent them from converging rapidly to the global optimum, and the design of their genetic operators often requires laborious manual tuning. To overcome these challenges, this study proposes a new framework that combines a large language model (LLM) with a traditional evolutionary algorithm to enhance the algorithm's search capability and generalization performance. In our framework, adaptive and hybrid mechanisms integrate the LLM with the MOEA, thereby accelerating convergence. Specifically, the adaptive mechanism uses an auxiliary evaluation function and automated prompt construction to flexibly adjust how the LLM is invoked, generating high-quality solutions that are further refined and optimized by the genetic operators. Concurrently, the hybrid mechanism aims to keep interaction costs with the LLM as low as possible.
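To make the interplay of the two mechanisms concrete, the following is a minimal sketch, not the paper's implementation: an MOEA loop in which an adaptive trigger (based on an auxiliary evaluation of recent progress) decides whether to request candidate solutions from an LLM, while a budget cap stands in for the hybrid mechanism that limits interaction costs. All names and parameters here (`query_llm`, `build_prompt`, `LLM_BUDGET`, the toy objectives) are hypothetical placeholders, not the framework's actual components.

```python
# Illustrative sketch only: an adaptive, budget-limited LLM call inside a simple MOEA loop.
import random

DIM, POP, GENS, LLM_BUDGET = 5, 20, 50, 10  # hypothetical cap on LLM interactions

def evaluate(x):
    """Toy bi-objective problem (both objectives minimized)."""
    f1 = sum(v * v for v in x)
    f2 = sum((v - 2.0) ** 2 for v in x)
    return (f1, f2)

def auxiliary_score(objs):
    """Auxiliary evaluation: a simple scalarization used only to gauge progress."""
    return sum(objs)

def build_prompt(elites):
    """Automated prompt construction from the current elite solutions (placeholder)."""
    lines = [f"objectives={evaluate(e)} variables={e}" for e in elites]
    return "Propose improved decision vectors given:\n" + "\n".join(lines)

def query_llm(prompt, n):
    """Placeholder for an LLM call; here it merely returns random candidate vectors.
    A real implementation would send the prompt and parse the reply into vectors."""
    return [[random.uniform(-1, 3) for _ in range(DIM)] for _ in range(n)]

def crossover_mutate(a, b):
    """Standard genetic operators: arithmetic crossover plus Gaussian mutation."""
    return [(ai + bi) / 2 + random.gauss(0, 0.1) for ai, bi in zip(a, b)]

pop = [[random.uniform(-1, 3) for _ in range(DIM)] for _ in range(POP)]
llm_calls, prev_best = 0, float("inf")
for gen in range(GENS):
    pop.sort(key=lambda x: auxiliary_score(evaluate(x)))
    best = auxiliary_score(evaluate(pop[0]))
    stalled = best >= prev_best - 1e-6          # adaptive trigger: no recent improvement
    prev_best = min(prev_best, best)
    if stalled and llm_calls < LLM_BUDGET:      # hybrid mechanism: capped LLM usage
        seeds = query_llm(build_prompt(pop[:3]), n=POP // 4)
        llm_calls += 1
    else:
        seeds = []
    # LLM-seeded candidates are refined alongside ordinary offspring by the genetic operators.
    offspring = seeds + [crossover_mutate(*random.sample(pop[:10], 2))
                         for _ in range(POP - len(seeds))]
    pop = sorted(pop + offspring, key=lambda x: auxiliary_score(evaluate(x)))[:POP]

print("best objectives:", evaluate(pop[0]), "| LLM calls used:", llm_calls)
```

In this sketch the auxiliary score is a plain objective sum used only to detect stagnation; the actual framework's auxiliary evaluation function, prompt format, and selection scheme may differ.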