Many training data attribution (TDA) methods aim to estimate how a model's behavior would change if one or more data points were removed from the training set. Methods based on implicit differentiation, such as influence functions, can be made computationally efficient, but they fail to account for underspecification, the implicit bias of the optimization algorithm, or multi-stage training pipelines. By contrast, methods based on unrolling address these issues but face scalability challenges. In this work, we connect the implicit-differentiation-based and unrolling-based approaches and combine their benefits by introducing Source, an approximate unrolling-based TDA method that is computed using an influence-function-like formula. While remaining computationally efficient compared to unrolling-based approaches, Source is suitable in cases where implicit-differentiation-based approaches struggle, such as non-converged models and multi-stage training pipelines. Empirically, Source outperforms existing TDA techniques in counterfactual prediction, especially in settings where implicit-differentiation-based approaches fall short.
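For readers unfamiliar with the influence-function formula the abstract alludes to, a minimal sketch of the classical estimate (Koh & Liang, 2017) is given below. This is the standard influence function, not Source's own estimator, which is not stated in the abstract; here $\hat{\theta}$ denotes the trained parameters, $z_m$ a training example, $z_t$ a test example, $n$ the training-set size, and $H_{\hat{\theta}}$ the empirical Hessian of the training loss, all conventional choices assumed for illustration.

% Classical influence-function estimate of the change in test loss when
% training example z_m is removed (Koh & Liang, 2017). Shown only to
% illustrate the family of "influence-function-like" formulas the abstract
% refers to; Source's estimator is not reproduced here.
\[
  \mathcal{L}\bigl(z_t, \hat{\theta}_{-z_m}\bigr) - \mathcal{L}\bigl(z_t, \hat{\theta}\bigr)
  \;\approx\;
  \frac{1}{n}\,
  \nabla_{\theta} \mathcal{L}\bigl(z_t, \hat{\theta}\bigr)^{\top}
  H_{\hat{\theta}}^{-1}\,
  \nabla_{\theta} \mathcal{L}\bigl(z_m, \hat{\theta}\bigr),
  \qquad
  H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} \mathcal{L}\bigl(z_i, \hat{\theta}\bigr).
\]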