Training data attribution (TDA) identifies which training examples most influenced a model's prediction. The best-performing TDA methods exploit gradients to define an influence function. To overcome the scalability challenge arising from gradient computation, the most popular strategy is random projection (e.g., TRAK, LoGRA). However, this still faces two bottlenecks when scaling to large training sets while preserving attribution quality: \emph{(i)} storing and loading projected per-example gradients for all $N$ training examples, where query latency is dominated by I/O; and \emph{(ii)} forming the $D \times D$ inverse Hessian approximation, which costs $O(D^2)$ memory. Both bottlenecks scale with the projection dimension $D$, yet increasing $D$ is necessary for attribution quality -- creating a quality-scalability tradeoff. We introduce \textbf{LoRIF (Low-Rank Influence Functions)}, which exploits the low-rank structure of gradients to address both bottlenecks. First, we store rank-$c$ factors of the projected per-example gradients rather than full matrices, reducing storage and query-time I/O from $O(D)$ to $O(c\sqrt{D})$ per layer per sample. Second, we use truncated SVD with the Woodbury identity to approximate the Hessian term in an $r$-dimensional subspace, reducing memory from $O(D^2)$ to $O(Dr)$. On models from 0.1B to 70B parameters trained on datasets with millions of examples, LoRIF achieves up to 20$\times$ storage reduction and query-time speedup compared to LoGRA, while matching or exceeding its attribution quality. LoRIF makes gradient-based TDA practical at frontier scale.
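The inverse-Hessian step above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; it only shows the general technique named in the abstract: take a rank-$r$ eigendecomposition of a damped Gauss-Newton-style Hessian approximation $H \approx U \,\mathrm{diag}(s)\, U^\top$ and apply $(\lambda I + U \,\mathrm{diag}(s)\, U^\top)^{-1}$ to a query vector via the Woodbury identity, storing only the $D \times r$ factor instead of the full $D \times D$ inverse. All sizes (`N`, `D`, `r`, `lam`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, r, lam = 256, 64, 8, 1e-2  # hypothetical sizes and damping

# Projected per-example gradients (stand-in for the real projected gradients)
G = rng.standard_normal((N, D)) / np.sqrt(N)
H = G.T @ G  # D x D Gauss-Newton-style Hessian approximation

# Rank-r truncation: for a symmetric PSD matrix, SVD gives the eigendecomposition
U, s, _ = np.linalg.svd(H)
U, s = U[:, :r], s[:r]  # keep only the top-r subspace: O(D*r) memory

def apply_inverse(v):
    """Apply (lam*I + U diag(s) U^T)^{-1} to v via the Woodbury identity.

    Since U has orthonormal columns, the identity collapses to
    (1/lam) * (v - U diag(s/(s+lam)) U^T v), costing O(D*r) per query.
    """
    coef = s / (s + lam)
    return (v - U @ (coef * (U.T @ v))) / lam

# Check against a dense solve with the same rank-r Hessian approximation
v = rng.standard_normal(D)
approx = apply_inverse(v)
exact = np.linalg.solve(lam * np.eye(D) + (U * s) @ U.T, v)
assert np.allclose(approx, exact)
```

The key point is that the dense $D \times D$ inverse never materializes: only $U$, $s$, and the damping $\lambda$ are kept, which is what reduces the memory footprint from $O(D^2)$ to $O(Dr)$.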