A central architectural question for both biological and artificial intelligence is whether judgment relies on specialized modules or on a unified, domain-general resource. While the discovery of decodable neural representations for distinct concepts in Large Language Models (LLMs) has suggested a modular architecture, whether these representations operate as truly independent systems remains an open question. Here we provide evidence for a convergent architecture for evaluative judgment. Across a range of LLMs, we find that diverse evaluative judgments are computed along a single dominant dimension, which we term the Valence-Assent Axis (VAA). This axis jointly encodes subjective valence ("what is good") and the model's assent to factual claims ("what is true"). Through direct interventions, we demonstrate that this axis drives a critical mechanism, which we identify as the subordination of reasoning: the VAA functions as a control signal that steers the generative process to construct rationales consistent with the model's evaluative state, even at the cost of factual accuracy. Our findings offer a mechanistic account of response bias and hallucination, revealing how an architecture that promotes coherent judgment can systematically undermine faithful reasoning.
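To make the analysis pipeline concrete, the sketch below illustrates one plausible reading of the method: fit a single dominant direction over contrastive valence/assent prompts via PCA on paired activation differences, then intervene by adding that direction to the residual stream during generation. This is an illustrative reconstruction, not the authors' code; the model (gpt2), layer index, prompt set, and steering scale are all assumptions.

```python
# Minimal sketch of axis extraction + activation steering (all settings hypothetical).
import torch
from sklearn.decomposition import PCA
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME, LAYER, SCALE = "gpt2", 6, 8.0  # assumed model, layer, and steering scale

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

# Contrastive pairs: even indices are positive/assenting, odd are negative/dissenting.
prompts = [
    "That decision was wonderful.", "That decision was terrible.",
    "The claim is clearly true.",   "The claim is clearly false.",
]

@torch.no_grad()
def last_token_state(text: str) -> torch.Tensor:
    """Hidden state of the final token at the chosen layer."""
    ids = tok(text, return_tensors="pt")
    out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER][0, -1]

# PCA over paired differences; the first component is the candidate axis.
states = torch.stack([last_token_state(p) for p in prompts])
diffs = (states[0::2] - states[1::2]).numpy()
axis = torch.from_numpy(PCA(n_components=1).fit(diffs).components_[0]).float()
axis = axis / axis.norm()

# Intervention: shift the residual stream along the axis at every forward pass.
def steer(module, inputs, output):
    hidden = output[0] + SCALE * axis.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("My verdict on this proposal is", return_tensors="pt")
out_ids = model.generate(**ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out_ids[0], skip_special_tokens=True))
handle.remove()
```

Using paired differences rather than raw states is a standard way to cancel prompt-specific content and isolate the contrast of interest; a simple difference-of-means probe would serve equally well in place of PCA.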