Modern continuous-time generative models typically induce \emph{V-shaped} flows: each sample travels independently along a nearly straight trajectory from the prior to the data. Although effective, this independent movement overlooks the hierarchical structures that exist in real-world data. To address this, we introduce \emph{Y-shaped generative flows}, a framework in which samples travel together along shared pathways before branching off to target-specific endpoints. Our formulation is theoretically justified, yet remains practical, requiring only minimal modifications to standard velocity-driven models. We implement this through a scalable, neural network-based training objective. Experiments on synthetic, image, and biological datasets demonstrate that our method recovers hierarchy-aware structures, improves distributional metrics over strong flow-based baselines, and reaches targets in fewer steps.