Bayesian optimization (BO) struggles in high dimensions, where Gaussian-process (GP) surrogates demand heavy retraining and brittle assumptions, slowing progress on real engineering and design problems. We introduce GIT-BO, a Gradient-Informed BO framework that couples TabPFN v2, a tabular foundation model (TFM) that performs zero-shot Bayesian inference in context, with an active-subspace mechanism computed from the model's own predictive-mean gradients. GIT-BO aligns exploration to an intrinsic low-dimensional subspace via a Fisher-information estimate and selects queries with an upper-confidence-bound (UCB) acquisition, requiring no online retraining. Across 60 problem variants spanning 20 benchmarks (nine scalable synthetic families and ten real-world tasks, e.g., power systems, Rover, MOPTA08, Mazda) of up to 500 dimensions, GIT-BO delivers a stronger performance-time trade-off than state-of-the-art GP-based methods (SAASBO, TuRBO, Vanilla BO, BAxUS), ranking highest in optimization performance, with runtime advantages that grow with dimensionality. Limitations include memory footprint and dependence on the capacity of the underlying TFM.
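The core mechanism, estimating an active subspace from the surrogate's predictive-mean gradients and scoring candidates with UCB, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the plain gradient outer-product estimate of the Fisher-information matrix, and the fixed `beta` are assumptions for exposition.

```python
import numpy as np

def active_subspace(grads, k):
    """Estimate a k-dimensional active subspace from surrogate gradients.

    grads: (n, d) array of predictive-mean gradients at sampled points.
    Builds the empirical Fisher-style matrix C = (1/n) * sum_i g_i g_i^T
    and returns the top-k eigenvectors, i.e. the directions along which
    the surrogate's prediction varies most (a common active-subspace
    recipe; GIT-BO's exact estimator may differ).
    """
    C = grads.T @ grads / grads.shape[0]   # (d, d) average gradient outer product
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    W = eigvecs[:, -k:][:, ::-1]           # top-k eigenvectors, descending order
    return W

def ucb(mean, std, beta=2.0):
    """Upper-confidence-bound acquisition (maximization convention):
    trade off exploiting the predicted mean against exploring uncertainty."""
    return mean + beta * std
```

In a BO loop, candidate points would be generated inside the `k`-dimensional subspace spanned by `W`, mapped back to the full `d`-dimensional space, scored with `ucb` on the TFM's predictions, and the argmax queried next.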