Gradient-based methods for instance-based explanation of large language models (LLMs) are hindered by the immense dimensionality of model gradients. In practice, influence estimation is restricted to a subset of model parameters to keep computation tractable, but this subset is often chosen ad hoc and rarely justified by systematic evaluation. This paper investigates whether it is better to build low-dimensional representations by selecting a small, architecturally informed subset of model components or by projecting the full gradients into a lower-dimensional space. Using a novel benchmark, we show that a greedily selected subset of components captures the information about training-data influence needed for a retrieval task more effectively than either the full gradient or a random projection. We further find that this approach is more computationally efficient than random projection, demonstrating that targeted component selection is a practical strategy for making instance-based explanations of large models computationally feasible.
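The two reduction strategies contrasted above can be illustrated with a toy sketch. The component names, sizes, and hand-picked subset below are hypothetical (the paper selects components greedily against a benchmark); the sketch only shows the shape of each approach: random projection compresses the concatenated full gradient, while component selection keeps the exact gradients of a few chosen components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-component gradients for a toy model: the full gradient
# is the concatenation of the gradients of each named component.
grads = {
    "layer0.attn": rng.normal(size=4096),
    "layer0.mlp":  rng.normal(size=8192),
    "layer1.attn": rng.normal(size=4096),
    "layer1.mlp":  rng.normal(size=8192),
}
full = np.concatenate(list(grads.values()))  # dimension D = 24576

# Strategy 1: random projection of the full gradient down to k dimensions
# (Johnson-Lindenstrauss style; costs a D x k matrix multiply per example).
k = 256
P = rng.normal(size=(k, full.size)) / np.sqrt(k)
projected = P @ full  # shape (256,)

# Strategy 2: component selection -- keep only the gradients of a chosen
# subset of components (chosen by hand here, greedily in the paper),
# skipping the projection entirely.
selected = ["layer0.mlp", "layer1.mlp"]
subset = np.concatenate([grads[name] for name in selected])  # shape (16384,)

print(projected.shape, subset.shape)
```

Note the efficiency asymmetry the abstract refers to: the selection strategy never materializes or multiplies by a projection matrix, and the gradients of unselected components need not be computed at all.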