Do large language models (LLMs) genuinely understand abstract concepts, or merely manipulate them as statistical patterns? We introduce an abstraction-grounding framework that decomposes conceptual understanding into three capacities: interpretation of abstract concepts (Abstract-Abstract, A-A), grounding of abstractions in concrete events (Abstract-Concrete, A-C), and application of abstract principles to regulate concrete decisions (Concrete-Concrete, C-C). Using human values as a testbed, given their semantic richness and centrality to alignment, we employ probing (detecting value traces in internal activations) and steering (modifying representations to shift behavior). Across six open-source LLMs and ten value dimensions, diagnostic probes trained solely on abstract value descriptions reliably detect the same values in concrete event narratives and decision reasoning, demonstrating cross-level transfer. Steering reveals an asymmetry: intervening on value representations causally shifts concrete judgments and decisions (A-C, C-C), yet leaves abstract interpretations (A-A) unchanged, suggesting that encoded abstract values function as stable anchors rather than malleable activations. These findings indicate that LLMs maintain structured value representations bridging abstraction and action, providing a mechanistic and operational foundation for building value-driven autonomous AI systems with more transparent, generalizable alignment and control.
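To make the probing-and-steering pipeline concrete, below is a minimal sketch of one common way to implement it: a logistic-regression probe trained on last-token hidden states from abstract value descriptions, then applied to a concrete narrative, followed by steering via activation addition along the probe direction. This is an illustration under stated assumptions, not the paper's actual setup: the model (`gpt2` as a small stand-in for the six open-source LLMs), the layer index, the toy example texts, and the steering coefficient `alpha` are all hypothetical placeholders.

```python
# Sketch of the probing/steering pipeline; all concrete choices are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL = "gpt2"  # illustrative stand-in for any open-source LLM
LAYER = 6       # which hidden-state layer to probe; an assumed choice

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def last_token_activation(text: str) -> torch.Tensor:
    """Hidden state of the final token at LAYER (one vector per text)."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1]

# 1) Probing: train only on abstract value descriptions (toy examples)...
abstract_pos = [
    "Honesty means telling the truth even at personal cost.",
    "An honest person does not deceive others for advantage.",
]
abstract_neg = [
    "The weather today is mild with scattered clouds.",
    "The train departs from platform four every hour.",
]
X = torch.stack([last_token_activation(t) for t in abstract_pos + abstract_neg]).numpy()
y = [1] * len(abstract_pos) + [0] * len(abstract_neg)
probe = LogisticRegression(max_iter=1000).fit(X, y)

# ...then test cross-level transfer on a concrete event narrative (A-C).
concrete = "She returned the extra change the cashier had handed her by mistake."
score = probe.predict_proba(
    last_token_activation(concrete).numpy().reshape(1, -1)
)[0, 1]
print(f"P(honesty trace in concrete narrative) = {score:.2f}")

# 2) Steering: add the unit-norm probe direction to the residual stream.
# hidden_states[LAYER] is the output of block LAYER-1 (index 0 = embeddings),
# so we hook that block and modify its output during generation.
direction = torch.tensor(probe.coef_[0], dtype=torch.float32)
direction = direction / direction.norm()
alpha = 4.0  # steering strength; an illustrative value

def steer(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * direction
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER - 1].register_forward_hook(steer)
ids = tok("Should I report the billing error?", return_tensors="pt")
steered = model.generate(**ids, max_new_tokens=30, do_sample=False)
handle.remove()
print(tok.decode(steered[0], skip_special_tokens=True))
```

In this sketch, the cross-level transfer claim corresponds to the probe scoring above chance on the concrete narrative despite seeing only abstract descriptions; the steering asymmetry would be tested by comparing the hooked model's outputs on concrete decision prompts (expected to shift) against its paraphrases of abstract value definitions (expected to stay stable).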