Amortized inference is the task of training a parametric model, such as a neural network, to approximate a distribution with a given unnormalized density where exact sampling is intractable. When sampling is implemented as a sequential decision-making process, reinforcement learning (RL) methods, such as generative flow networks (GFlowNets), can be used to train the sampling policy. Off-policy RL training facilitates the discovery of diverse, high-reward candidates, but existing methods still face challenges in efficient exploration. We propose to use an adaptive training distribution (the Teacher) to guide the training of the primary amortized sampler (the Student) by prioritizing high-loss regions. The Teacher, an auxiliary behavior model, is trained to sample high-error regions of the Student and can generalize across unexplored modes, thereby enhancing mode coverage by providing an efficient training curriculum. We validate the effectiveness of this approach in a synthetic environment designed to present an exploration challenge, two diffusion-based sampling tasks, and four biochemical discovery tasks, demonstrating its ability to improve sample efficiency and mode coverage.
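To make the Teacher-Student interaction concrete, here is a minimal sketch of the idea on a toy problem, not the paper's method: it replaces the sequential GFlowNet samplers with direct categorical distributions over a 1-D grid of states and uses a simple squared regression of log-probabilities onto a normalized target. All names (`student_logits`, `teacher_logits`, the mode locations, learning rates, and batch size) are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
N = 64                                           # toy 1-D grid of states (assumption)
x = torch.linspace(-4.0, 4.0, N)
# Unnormalized log-reward with two well-separated modes (the exploration challenge).
log_reward = torch.logsumexp(
    torch.stack([-(x - 2.5) ** 2, -(x + 2.5) ** 2]) / 0.1, dim=0)
log_target = torch.log_softmax(log_reward, dim=0)  # normalized target log-probs

student_logits = torch.zeros(N, requires_grad=True)  # Student: primary amortized sampler
teacher_logits = torch.zeros(N, requires_grad=True)  # Teacher: adaptive behavior model
opt_s = torch.optim.Adam([student_logits], lr=0.1)
opt_t = torch.optim.Adam([teacher_logits], lr=0.1)

for step in range(500):
    # Student update: regress the Student's log-probs onto the target on
    # states sampled from the Teacher (off-policy training distribution).
    with torch.no_grad():
        behavior = torch.softmax(teacher_logits, dim=0)
        batch = torch.multinomial(behavior, 256, replacement=True)
    log_p = torch.log_softmax(student_logits, dim=0)[batch]
    student_loss = ((log_p - log_target[batch]) ** 2).mean()
    opt_s.zero_grad()
    student_loss.backward()
    opt_s.step()

    # Teacher update: the Teacher's "reward" is the Student's per-state error,
    # so its mass is pushed toward high-loss (under-explored) regions.
    with torch.no_grad():
        err = (torch.log_softmax(student_logits, dim=0) - log_target) ** 2
        teacher_target = torch.log_softmax((err + 1e-8).log(), dim=0)
    log_q = torch.log_softmax(teacher_logits, dim=0)
    teacher_loss = ((log_q - teacher_target) ** 2).mean()
    opt_t.zero_grad()
    teacher_loss.backward()
    opt_t.step()
```

The curriculum emerges from the coupling: once the Student fits one mode, the error there shrinks, so the Teacher's sampling mass shifts toward the remaining high-error mode, which is the adaptive prioritization mechanism described above, reduced to its simplest form.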