Undeniably, Large Language Models (LLMs) have stirred an extraordinary wave of innovation in machine learning research, with substantial impact across diverse fields such as reinforcement learning, robotics, and computer vision. Their incorporation has been rapid and transformative, marking a significant paradigm shift for the field. However, experimental design, grounded in black-box optimization, has been far less affected by this shift, even though integrating LLMs with optimization presents a landscape ripe for exploration. In this position paper, we frame black-box optimization around sequence-based foundation models and organize their relationship with previous literature. We discuss the most promising ways foundational language models can revolutionize optimization: harnessing the wealth of information encapsulated in free-form text to enrich task comprehension, using highly flexible sequence models such as Transformers to engineer superior optimization strategies, and improving performance prediction over previously unseen search spaces.
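To make the black-box setting concrete: the optimizer queries an opaque objective and observes only input/output pairs, and a sequence model can in principle read the serialized trial history as text and propose the next candidate. Below is a minimal, runnable sketch of that loop, not the paper's concrete method; `llm_propose` is a hypothetical placeholder for a Transformer-based proposer and is mocked here with random sampling so the example runs as-is.

```python
# Minimal sketch of a black-box optimization loop with a sequence-model proposer.
# The objective f is opaque: the optimizer only ever sees (input, value) pairs.
import random


def f(x: float) -> float:
    """Black-box objective: the optimizer never sees this formula."""
    return -(x - 0.3) ** 2


def llm_propose(history: list[tuple[float, float]]) -> float:
    """Hypothetical stand-in for an LLM/Transformer that conditions on a
    free-form text serialization of past trials (an assumption for this
    sketch); mocked here with uniform random sampling."""
    prompt = "\n".join(f"x={x:.3f} -> f={y:.3f}" for x, y in history)
    _ = prompt  # a real proposer would condition on this text
    return random.uniform(0.0, 1.0)


history: list[tuple[float, float]] = []
for _ in range(20):
    x = llm_propose(history)
    history.append((x, f(x)))  # only input/output pairs are observed

best_x, best_y = max(history, key=lambda t: t[1])
print(f"best x={best_x:.3f}, f(x)={best_y:.3f}")
```

Swapping the mocked proposer for an actual language model, conditioned on the same textual history, is exactly the kind of substitution the paper's framing contemplates.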