In many-task optimization scenarios, surrogate models are valuable for mitigating the computational burden of repeated fitness evaluations across tasks. This study proposes a novel meta-surrogate framework that assists many-task optimization by leveraging the knowledge-transfer strengths and emergent capabilities of large language models (LLMs). We formulate a unified framework for many-task fitness prediction by defining a universal model with metadata to fit a group of problems. Fitness prediction is performed on metadata and decision variables, enabling efficient knowledge sharing across tasks and adaptability to new tasks. The LLM-based meta-surrogate treats fitness prediction as conditional probability estimation, employing a unified token-sequence representation for task metadata, inputs, and outputs. This approach facilitates efficient inter-task knowledge sharing through shared token embeddings and captures complex task dependencies via multi-task model training. Experimental results demonstrate the model's emergent generalization ability, including zero-shot performance on problems with unseen dimensions. When integrated into evolutionary transfer optimization (ETO), our framework supports knowledge transfer at two levels, the surrogate level and the individual level, enhancing optimization efficiency and robustness. This work establishes a novel foundation for applying LLMs in surrogate modeling and offers a versatile solution for many-task optimization.
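To make the "unified token-sequence representation" concrete, the following is a minimal sketch, assuming a hypothetical serialization scheme (the delimiter tokens `<meta>`, `<x>`, `<fitness>` and the `serialize` helper are illustrative, not the paper's actual format): task metadata and decision variables are flattened into a single sequence, and an LLM trained across many tasks would then be asked to continue it with fitness tokens, i.e. estimate p(fitness | metadata, x).

```python
def serialize(metadata: dict, x: list) -> str:
    """Flatten task metadata and decision variables into one token sequence.

    Hypothetical scheme for illustration: metadata fields and decision
    variables are joined with special delimiter tokens; the trailing
    <fitness> token marks where the model's prediction would begin.
    """
    meta = " ".join(f"{k}={v}" for k, v in metadata.items())
    vars_ = " ".join(f"{xi:.3f}" for xi in x)
    return f"<meta> {meta} <x> {vars_} <fitness>"


prompt = serialize({"task": "sphere", "dim": 3}, [0.1, -0.2, 0.3])
print(prompt)
# -> <meta> task=sphere dim=3 <x> 0.100 -0.200 0.300 <fitness>
```

Because every task is reduced to the same sequence format, tasks of different dimensions share one vocabulary and one set of token embeddings, which is what enables cross-task knowledge sharing and zero-shot use on unseen dimensions.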