Aligning Large Language Models (LLMs) with the diverse spectrum of human values remains a central challenge: preference-based methods often fail to capture deeper motivational principles. Value-based approaches offer a more principled path, yet three gaps persist: extraction often ignores hierarchical structure, evaluation detects presence but not calibrated intensity, and the steerability of LLMs at controlled intensities remains insufficiently understood. To address these limitations, we introduce VALUEFLOW, the first unified framework that spans extraction, evaluation, and steering with calibrated intensity control. The framework integrates three components: (i) HIVES, a hierarchical value embedding space that captures intra- and cross-theory value structure; (ii) the Value Intensity DataBase (VIDB), a large-scale resource of value-labeled texts with intensity estimates derived from ranking-based aggregation; and (iii) an anchor-based evaluator that produces consistent intensity scores for model outputs by ranking them against VIDB panels. Using VALUEFLOW, we conduct a comprehensive large-scale study across ten models and four value theories, identifying asymmetries in steerability and composition laws for multi-value control. This paper establishes a scalable infrastructure for evaluating and controlling value intensity, advancing pluralistic alignment of LLMs.