Combinatorial optimization problems often rely on heuristic algorithms to generate efficient solutions. However, the manual design of heuristics is resource-intensive and constrained by the designer's expertise. Recent advances in artificial intelligence, particularly large language models (LLMs), have demonstrated the potential to automate heuristic generation through evolutionary frameworks. Existing work on constructive heuristic design, however, has focused only on well-known combinatorial optimization problems such as the traveling salesman problem and the online bin packing problem. This study investigates whether LLMs can effectively generate heuristics for niche, not yet broadly researched optimization problems, using the unit-load pre-marshalling problem as an example case. We propose the Contextual Evolution of Heuristics (CEoH) framework, an extension of the Evolution of Heuristics (EoH) framework, which incorporates problem-specific descriptions to enhance in-context learning during heuristic generation. In computational experiments, we evaluate CEoH against EoH and compare the results. The results indicate that CEoH enables smaller LLMs to generate high-quality heuristics more consistently and even to outperform larger models, whereas larger models perform robustly with or without contextualized prompts. The generated heuristics also scale to diverse instance configurations.