Score-based methods are powerful across machine learning, but they face a paradox: they are path-independent in theory yet path-dependent in practice. We resolve this by proving that practical training objectives differ from the ideal, ground-truth objective by a crucial, overlooked term: the path variance of the score function. We propose the MinPV (**Min**imum **P**ath **V**ariance) Principle, which selects interpolation paths that minimize this variance. Our key contribution is a closed-form expression for the path variance, which makes the optimization tractable. By parameterizing the path with a flexible Kumaraswamy Mixture Model, our method learns data-adaptive, low-variance paths without heuristic, manual path selection. This principled optimization of the complete objective yields more accurate and stable estimators, establishes new state-of-the-art results on challenging benchmarks, and provides a general framework for optimizing score-based interpolation.
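To make the path parameterization concrete, the following is a minimal sketch, not the paper's implementation: it illustrates how a mixture of Kumaraswamy CDFs, F(t; a, b) = 1 - (1 - t^a)^b, yields a flexible, monotone interpolation schedule on [0, 1]. All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def kumaraswamy_cdf(t, a, b):
    """Kumaraswamy CDF on [0, 1]: F(t; a, b) = 1 - (1 - t^a)^b, for a, b > 0."""
    return 1.0 - (1.0 - t ** a) ** b

def mixture_path(t, weights, a_params, b_params):
    """Monotone path alpha(t) = sum_k w_k * F(t; a_k, b_k).

    Each component CDF rises from 0 to 1, so any convex combination is
    itself a valid monotone schedule with alpha(0) = 0 and alpha(1) = 1.
    In the MinPV setting the weights and shape parameters would be the
    quantities optimized to lower the path variance (illustrative sketch).
    """
    t = np.asarray(t, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize mixture weights to sum to 1
    comps = np.stack([kumaraswamy_cdf(t, a, b)
                      for a, b in zip(a_params, b_params)])
    return np.tensordot(w, comps, axes=1)

# Example: a two-component mixture evaluated on a uniform grid.
ts = np.linspace(0.0, 1.0, 11)
path = mixture_path(ts, weights=[0.5, 0.5],
                    a_params=[2.0, 0.5], b_params=[3.0, 1.5])
```

Because the Kumaraswamy CDF has closed-form value and inverse, such mixtures are cheap to evaluate and differentiate, which is what makes them attractive as a learnable path family.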