Accurate trajectory prediction is crucial for autonomous driving, yet uncertainty in agent behavior and perception noise make it inherently challenging. While multi-modal trajectory prediction models generate multiple plausible future paths with associated probabilities, effectively quantifying uncertainty remains an open problem. In this work, we propose a novel multi-modal trajectory prediction approach based on evidential deep learning that estimates both positional and mode-probability uncertainty in real time. Our approach models positional uncertainty with a Normal Inverse Gamma distribution and mode uncertainty with a Dirichlet distribution. Unlike sampling-based methods, it infers both types of uncertainty in a single forward pass, significantly improving efficiency. Additionally, we explore uncertainty-driven importance sampling, which improves training efficiency by prioritizing underrepresented high-uncertainty samples over redundant ones. Extensive evaluations on the Argoverse 1 and Argoverse 2 datasets demonstrate that our method provides reliable uncertainty estimates while maintaining high trajectory prediction accuracy.
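To illustrate the single-pass uncertainty estimation described above, the following minimal NumPy sketch maps raw network outputs to Normal Inverse Gamma parameters (positional uncertainty) and Dirichlet concentrations (mode uncertainty). The function names, the softplus link for positivity constraints, and the specific uncertainty decompositions are standard choices from the evidential deep learning literature, not details taken from the paper itself.

```python
import numpy as np


def evidential_head(raw):
    """Map unconstrained network outputs to evidential parameters.

    `raw` is a dict of raw head outputs (hypothetical names).
    softplus(x) = log(1 + e^x) keeps nu, alpha-1, beta, and the
    Dirichlet evidence strictly positive.
    """
    mu = raw["mu"]                                  # predicted position
    nu = np.logaddexp(0.0, raw["nu"])               # softplus: nu > 0
    alpha = np.logaddexp(0.0, raw["alpha"]) + 1.0   # alpha > 1
    beta = np.logaddexp(0.0, raw["beta"])           # beta > 0
    evidence = np.logaddexp(0.0, raw["mode_logits"])
    dir_alpha = evidence + 1.0                      # Dirichlet concentrations
    return mu, nu, alpha, beta, dir_alpha


def uncertainties(nu, alpha, beta, dir_alpha):
    """Closed-form uncertainties, computed in one forward pass."""
    aleatoric = beta / (alpha - 1.0)            # expected observation noise
    epistemic = beta / (nu * (alpha - 1.0))     # model (knowledge) uncertainty
    probs = dir_alpha / dir_alpha.sum()         # expected mode probabilities
    num_modes = dir_alpha.size
    mode_uncertainty = num_modes / dir_alpha.sum()  # Dirichlet vacuity
    return aleatoric, epistemic, probs, mode_uncertainty
```

Because both heads yield distribution parameters directly, no Monte Carlo sampling or model ensembling is required at inference time, which is the efficiency advantage the abstract highlights.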