Human trajectory forecasting requires capturing the multimodal nature of pedestrian behavior. However, existing approaches suffer from prior misalignment: their learned or fixed priors often fail to capture the full distribution of plausible futures, limiting both prediction accuracy and diversity. We theoretically establish that prediction error is lower-bounded by the quality of the prior, making prior modeling a key performance bottleneck. Guided by this insight, we propose AGMA (Adaptive Gaussian Mixture Anchors), which constructs expressive priors in two stages: extracting diverse behavioral patterns from training data, then distilling them into a scene-adaptive global prior for inference. Extensive experiments on the ETH-UCY, Stanford Drone, and JRDB datasets demonstrate that AGMA achieves state-of-the-art performance, confirming the critical role of high-quality priors in trajectory forecasting.
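The core idea of a Gaussian-mixture prior over trajectory anchors can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: the array names, the isotropic per-component noise, and the idea that the mixture `logits` could be produced by a scene encoder are all assumptions for exposition.

```python
import numpy as np

# Hypothetical sketch of a Gaussian-mixture trajectory prior.
# Each mixture component is an "anchor": a mean future trajectory of T steps in 2D.
rng = np.random.default_rng(0)

K, T = 4, 12                        # number of anchors, prediction horizon
means = rng.normal(size=(K, T, 2))  # anchor trajectories (one per component)
log_sigma = np.zeros(K)             # per-component isotropic noise scale (log std)
logits = np.zeros(K)                # mixture logits; could be scene-conditioned

def sample_trajectories(n):
    """Draw n future trajectories from the mixture prior."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax over components
    comps = rng.choice(K, size=n, p=probs)       # pick one anchor per sample
    eps = rng.normal(size=(n, T, 2))             # standard Gaussian noise
    sigma = np.exp(log_sigma)[comps][:, None, None]
    return means[comps] + sigma * eps            # reparameterized samples

samples = sample_trajectories(20)
print(samples.shape)  # (20, 12, 2)
```

A richer prior (more, better-placed anchors) widens the set of futures such samples can cover, which is the sense in which prior quality bounds achievable prediction error.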