Generative Flow Networks (GFlowNets) are a flexible family of amortized samplers trained to generate discrete, compositional objects with probability proportional to a reward function. However, their learning efficiency is constrained by the model's ability to rapidly explore diverse high-probability regions during training. To mitigate this issue, recent works have incentivized the exploration of unvisited and valuable states via curiosity-driven search and self-supervised random network distillation; these strategies, however, tend to waste samples on already well-approximated regions of the state space. In this context, we propose Adaptive Complementary Exploration (ACE), a principled algorithm for the effective exploration of novel, high-probability regions when learning GFlowNets. To achieve this, ACE introduces an exploration GFlowNet explicitly trained to search for high-reward states in regions underexplored by the canonical GFlowNet, which in turn learns to sample from the target distribution. Through extensive experiments, we show that ACE significantly improves upon prior work in terms of both approximation accuracy with respect to the target distribution and the discovery rate of diverse high-reward states.
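The two-sampler idea described above can be illustrated with a minimal tabular sketch. This is not the paper's implementation: the environment (length-3 binary sequences), the reward values, and the novelty reweighting `R(x) / (1 + visits[x])` used for the exploration model are all illustrative assumptions. Both models are trained with the trajectory-balance objective; because the state space is a tree, the backward policy is trivial and drops out of the loss. The exploration GFlowNet's samples are replayed off-policy into the canonical GFlowNet's updates, which is valid since trajectory balance holds for trajectories from any full-support sampler.

```python
import itertools
import math
import random

L = 3  # sequence length; terminal states are all 2**L bit strings

def reward(x):
    # toy multimodal reward (assumed for illustration): peaks at 000 and 111
    return {0: 2.0, 1: 0.1, 2: 0.1, 3: 3.0}[sum(x)]

class TabularGFN:
    """Tabular trajectory-balance GFlowNet on a binary-sequence tree."""
    def __init__(self, lr=0.05):
        self.logits = {}  # state (bit tuple) -> [logit(append 0), logit(append 1)]
        self.logZ = 0.0
        self.lr = lr

    def probs(self, s):
        l = self.logits.setdefault(s, [0.0, 0.0])
        m = max(l)
        e = [math.exp(v - m) for v in l]
        t = sum(e)
        return [v / t for v in e]

    def sample(self, rng):
        s, traj = (), []
        for _ in range(L):
            p = self.probs(s)
            a = 0 if rng.random() < p[0] else 1
            traj.append((s, a))
            s = s + (a,)
        return s, traj

    def tb_update(self, traj, log_r):
        # trajectory balance loss: (logZ + sum log P_F - log R(x))^2
        # (P_B = 1 on a tree, so the backward term vanishes)
        delta = self.logZ + sum(math.log(self.probs(s)[a]) for s, a in traj) - log_r
        self.logZ -= self.lr * 2 * delta
        for s, a in traj:
            p = self.probs(s)
            for b in range(2):
                # gradient of log-softmax: 1[b == a] - p(b)
                self.logits[s][b] -= self.lr * 2 * delta * ((1.0 if b == a else 0.0) - p[b])

rng = random.Random(0)
main, explorer = TabularGFN(), TabularGFN()
visits = {}  # terminal-state visit counts accumulated by the main model

for step in range(4000):
    # exploration model: reward reweighted by a novelty factor (assumed form)
    x, _ = explorer.sample(rng)
    explorer.tb_update([(x[:i], x[i]) for i in range(L)],
                       math.log(reward(x) / (1 + visits.get(x, 0))))
    # canonical model: one on-policy update plus one off-policy replay
    y, tr = main.sample(rng)
    visits[y] = visits.get(y, 0) + 1
    main.tb_update(tr, math.log(reward(y)))
    main.tb_update([(x[:i], x[i]) for i in range(L)], math.log(reward(x)))

def terminal_prob(model, x):
    # exact probability of generating terminal x under the learned forward policy
    p, s = 1.0, ()
    for a in x:
        p *= model.probs(s)[a]
        s = s + (a,)
    return p

terminals = list(itertools.product((0, 1), repeat=L))
best = max(terminals, key=lambda x: terminal_prob(main, x))
```

After training, the canonical model's terminal distribution should concentrate on the high-reward modes, and `main.logZ` should approach the log partition function `log(sum of rewards)`; the novelty reweighting steadily pushes the explorer away from terminals the main model has already visited often.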