Intermediate-layer predictions in large language models (LLMs) are informative but hard to decode accurately, especially at early layers. Existing lens-style methods typically rely on direct linear readout, which is simple but often drifts away from the model's eventual prediction. We propose SimLens, a simple training-free decoder for single-token decision tasks that keeps only the start token and a candidate answer token ([s] and [a]) and performs one lightweight continuation through the remaining upper layers. This surprisingly small modification recovers far more accurate latent predictions than direct linear decoding. We further introduce Linear SimLens, a lightweight linear approximation for entropy-based confidence estimation, and combine the two in SimExit, a hybrid early-exit mechanism. On ARC, BoolQ, and HeadQA with LLaMA-7B and Vicuna-7B, SimLens improves Iso-Compute accuracy in all six settings, with an average gain of +0.43 even when the fair-compute budget accounts for the extra two-token post-forward overhead. SimExit yields an average 1.15$\times$ speedup at the best-accuracy operating points and 1.40$\times$ when allowing up to a 1 percentage-point accuracy drop. Ablations show that [s] and [a] play distinct roles, serving as a global condition and a semantic anchor, respectively.