As Large Language Models (LLMs) scale to million-token contexts, traditional mechanistic interpretability techniques for analyzing attention scale quadratically with context length, demanding terabytes of memory beyond 100,000 tokens. We introduce Sparse Tracing, a novel technique that leverages dynamic sparse attention to efficiently analyze long-context attention patterns. We present Stream, a compilable hierarchical pruning algorithm that estimates per-head sparse attention masks in near-linear time $O(T \log T)$ and linear space $O(T)$, enabling one-pass interpretability at scale. Stream performs a binary-search-style refinement to retain only the top-$k$ key blocks per query while preserving the model's next-token behavior. We apply Stream to long chain-of-thought reasoning traces and identify thought anchors while pruning 97-99\% of token interactions. On the RULER benchmark, Stream preserves critical retrieval paths while discarding 90-96\% of interactions and exposes layer-wise routes from the needle to the output. Our method offers a practical drop-in tool for analyzing attention patterns and tracing information flow without terabytes of caches. By making long-context interpretability feasible on consumer GPUs, Sparse Tracing helps democratize chain-of-thought monitoring. Code is available at https://anonymous.4open.science/r/stream-03B8/.
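The core idea of retaining only the top-$k$ key blocks per query can be illustrated with a minimal sketch. This is not the paper's Stream implementation (which is hierarchical and compilable); it is a simplified, single-query illustration of block-sparse mask estimation, with all function and variable names being our own assumptions:

```python
import numpy as np

def sparse_block_mask(q, keys, block=8, keep=2):
    """Illustrative sketch: score contiguous key blocks against one
    query vector and keep only the top-`keep` blocks, yielding a
    boolean sparse-attention mask over the key positions."""
    T, d = keys.shape
    n_blocks = (T + block - 1) // block
    scores = np.empty(n_blocks)
    for b in range(n_blocks):
        seg = keys[b * block:(b + 1) * block]
        scores[b] = (seg @ q).max()          # coarse per-block summary score
    kept = np.argsort(scores)[-keep:]        # top-k key blocks for this query
    mask = np.zeros(T, dtype=bool)
    for b in kept:
        mask[b * block:(b + 1) * block] = True
    return mask

rng = np.random.default_rng(0)
T, d = 64, 16
keys = rng.normal(size=(T, d))
q = keys[37] + 0.1 * rng.normal(size=d)      # query strongly matching key 37
mask = sparse_block_mask(q, keys, block=8, keep=2)
# the block containing key 37 is retained; 75% of interactions are pruned
```

Stream's binary-search-style refinement replaces the flat scan over blocks with a coarse-to-fine search, which is what brings the cost down to near-linear $O(T \log T)$ time.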