As large language models (LLMs) become increasingly prevalent in critical applications, the need for interpretable AI has grown. We introduce TokenSHAP, a novel method for interpreting LLMs by attributing importance to individual tokens or substrings within input prompts. This approach adapts Shapley values from cooperative game theory to natural language processing, offering a rigorous framework for understanding how different parts of an input contribute to a model's response. TokenSHAP leverages Monte Carlo sampling for computational efficiency, providing interpretable, quantitative measures of token importance. We demonstrate its efficacy across diverse prompts and LLM architectures, showing improvements over existing baselines in alignment with human judgments, faithfulness to model behavior, and consistency of explanations. Our method's ability to capture nuanced interactions between tokens provides valuable insights into LLM behavior, enhancing model transparency, improving prompt engineering, and aiding the development of more reliable AI systems. TokenSHAP represents a significant step toward the interpretability necessary for responsible AI deployment, contributing to the broader goal of creating more transparent, accountable, and trustworthy AI systems.
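To make the core idea concrete, the sketch below illustrates Monte Carlo estimation of Shapley-style token importances in the spirit described above: random orderings of tokens are sampled, and each token is credited with the marginal change in response similarity when it is added to the growing subset. This is a minimal illustration, not the authors' implementation; the helpers `query_model` (prompt to response) and `similarity` (response similarity score), the space-joined token subsets, and the choice of value function are all assumptions for exposition.

```python
# Minimal sketch of Monte Carlo Shapley-style token attribution.
# query_model and similarity are hypothetical placeholders, not part of TokenSHAP's published API.
import random
from typing import Callable, List


def monte_carlo_token_importance(
    tokens: List[str],
    query_model: Callable[[str], str],        # assumed: prompt -> model response
    similarity: Callable[[str, str], float],  # assumed: similarity of two responses in [0, 1]
    num_samples: int = 100,
) -> List[float]:
    """Estimate each token's importance by sampling random token orderings and
    averaging its marginal contribution to similarity with the full-prompt response."""
    baseline = query_model(" ".join(tokens))  # response to the complete prompt
    values = [0.0] * len(tokens)
    for _ in range(num_samples):
        order = random.sample(range(len(tokens)), len(tokens))  # random permutation of token indices
        included: List[int] = []
        prev_score = 0.0  # value of the empty subset, taken as 0 by convention
        for idx in order:
            included.append(idx)
            subset_prompt = " ".join(tokens[i] for i in sorted(included))
            score = similarity(query_model(subset_prompt), baseline)
            values[idx] += score - prev_score  # marginal contribution of this token
            prev_score = score
    return [v / num_samples for v in values]  # average over sampled orderings
```

In practice, the number of sampled orderings trades off estimation variance against the cost of model queries, which is the computational bottleneck this Monte Carlo approximation is meant to control.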