People increasingly seek advice online from both human peers and large language model (LLM)-based chatbots. Such advice rarely involves identifying a single correct answer; instead, it typically requires navigating trade-offs among competing values. We aim to characterize how LLMs navigate value trade-offs across different advice-seeking contexts. First, we examine the value trade-off structure underlying advice seeking using a curated dataset from four advice-oriented subreddits. Using a bottom-up approach, we inductively construct a hierarchical value framework by aggregating fine-grained values extracted from individual advice options into higher-level value categories. We then build value co-occurrence networks to characterize how values co-occur within dilemmas and find substantial heterogeneity in value trade-off structures across advice-seeking contexts: a women-focused subreddit exhibits the highest network density, indicating more complex value conflicts; women's, men's, and friendship-related subreddits exhibit highly correlated value-conflict patterns centered on security-related tensions (security vs. respect/connection/commitment); by contrast, career advice forms a distinct structure in which security frequently clashes with self-actualization and growth. Finally, we evaluate LLM value preferences against these dilemmas and find that, across models and contexts, LLMs consistently prioritize values related to Exploration & Growth over Benevolence & Connection. This systematically skewed value orientation highlights a potential risk of value homogenization in AI-mediated advice, raising concerns about how such systems may shape decision-making and normative outcomes at scale.
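The co-occurrence analysis sketched above can be illustrated in a few lines. The snippet below is a minimal sketch, not the paper's actual pipeline: it assumes each dilemma has already been annotated with the higher-level value categories at stake, builds a weighted value co-occurrence network with `networkx`, and computes the network density used to compare contexts. The dilemma annotations and value labels here are hypothetical examples, not drawn from the paper's dataset.

```python
# Illustrative sketch: build a value co-occurrence network from annotated
# dilemmas and compute its density. Values and dilemmas are hypothetical.
from itertools import combinations

import networkx as nx

# Each dilemma is represented by the set of value categories in tension.
dilemmas = [
    {"security", "respect"},
    {"security", "connection"},
    {"security", "commitment", "respect"},
    {"security", "self-actualization"},
]

G = nx.Graph()
for values in dilemmas:
    # Every pair of values co-occurring in one dilemma adds an edge
    # (or increments the weight of an existing edge).
    for u, v in combinations(sorted(values), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1
        else:
            G.add_edge(u, v, weight=1)

# Density: fraction of all possible value pairs that actually co-occur.
# Higher density suggests more entangled value conflicts in that context.
density = nx.density(G)
```

Comparing `density` across per-subreddit networks built this way is one simple proxy for the "complexity of value conflicts" contrast described above; edge weights additionally expose which specific tensions (e.g. security vs. respect) dominate a context.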