Bayesian active learning relies on the precise quantification of predictive uncertainty to explore unknown function landscapes. While Gaussian process surrogates are the standard for such tasks, an underappreciated fact is that their posterior variance depends on the observed outputs only through the hyperparameters, rendering exploration largely insensitive to the actual measurements. We propose to inject observation-dependent feedback by warping the input space with a learned, monotone reparameterization. This mechanism allows the design policy to expand or compress regions of the input space in response to observed variability, thereby shaping the behavior of variance-based acquisition functions. We demonstrate that while such warps can be trained via marginal likelihood, a novel self-supervised objective yields substantially better performance. Our approach improves sample efficiency across a range of active learning benchmarks, particularly in regimes where non-stationarity challenges traditional methods.
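The claim that the GP posterior variance ignores the observed outputs follows directly from the standard predictive equations: the variance at a test point is k(x*, x*) − k*ᵀ(K + σ²I)⁻¹k*, which contains no y term. A minimal NumPy sketch of this fact (illustrative only; the RBF kernel and hyperparameter values are placeholders, not the paper's model):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

def gp_posterior(X, y, x_star, noise=1e-4, ls=1.0):
    """Standard GP posterior mean and pointwise variance at x_star."""
    K = rbf(X, X, ls) + noise * np.eye(len(X))
    Ks = rbf(X, x_star, ls)
    Kss = rbf(x_star, x_star, ls)
    mean = Ks.T @ np.linalg.solve(K, y)          # depends on y
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)    # no y anywhere
    return mean, np.diag(cov)

X = np.array([0.0, 0.5, 1.0])
x_star = np.array([0.25, 0.75])

# Same inputs, wildly different outputs, fixed hyperparameters:
_, v1 = gp_posterior(X, np.array([0.0, 0.0, 0.0]), x_star)
_, v2 = gp_posterior(X, np.array([5.0, -3.0, 100.0]), x_star)
print(np.allclose(v1, v2))  # True: identical posterior variances
```

With hyperparameters held fixed, any set of outputs at the same inputs yields the same posterior variance, so a purely variance-based acquisition function would propose the same query sequence regardless of what was measured.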