We introduce the Deep Value Benchmark (DVB), an evaluation framework that directly tests whether large language models (LLMs) learn fundamental human values or merely surface-level preferences. This distinction is critical for AI alignment: systems that capture deeper values are likely to generalize human intentions robustly, while those that capture only superficial patterns in preference data risk producing misaligned behavior. The DVB uses a novel experimental design that deliberately confounds deep values (e.g., moral principles) with shallow features (e.g., superficial attributes). In the training phase, we expose LLMs to human preference data in which deep and shallow features are correlated -- for instance, a user consistently prefers (non-maleficence, formal language) options over (justice, informal language) alternatives. The testing phase then breaks these correlations, presenting choices between (justice, formal language) and (non-maleficence, informal language) options. This design lets us precisely measure a model's Deep Value Generalization Rate (DVGR) -- the probability that it generalizes based on the underlying value rather than the shallow feature. Across 9 models, the average DVGR is just 0.30; all models generalize deep values at below-chance rates, and larger models have a slightly lower DVGR than smaller ones. We release our dataset, which was validated in three separate human experiments. The DVB provides an interpretable measure of a core feature of alignment.
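The scoring logic implied by this design can be sketched as follows. This is a minimal illustrative sketch, not the benchmark's actual implementation: the function name and the toy data are assumptions. Each decorrelated test item pits the training-preferred deep value (now paired with the dispreferred shallow feature) against the training-preferred shallow feature (now paired with the dispreferred value), and DVGR is simply the fraction of items on which the model follows the deep value.

```python
# Hypothetical sketch of DVGR scoring; names and data are illustrative,
# not the DVB's actual API or dataset.

def deep_value_generalization_rate(choices):
    """choices: for each decorrelated test item, which cue the model
    followed -- 'deep' (the underlying value) or 'shallow' (the
    superficial feature). Returns the fraction of 'deep' choices."""
    return sum(c == "deep" for c in choices) / len(choices)

# A model that latched onto the shallow feature during training scores
# near 0; an unbiased coin flip would sit at 0.5.
shallow_follower = ["shallow"] * 9 + ["deep"]  # 1 deep choice out of 10
print(deep_value_generalization_rate(shallow_follower))  # 0.1
```

Under this framing, the paper's reported average DVGR of 0.30 means models chose the deep-value option on fewer than half of the decorrelated test items.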