Large language models (LLMs) are increasingly used to simulate decision-making tasks involving personal data sharing, where privacy concerns and prosocial motivations can push choices in opposite directions. Existing evaluations often measure privacy-related attitudes or sharing intentions in isolation, which makes it difficult to determine whether a model's expressed values jointly predict its downstream data-sharing actions, as they do in real human behavior. We introduce a context-based assessment protocol that sequentially administers standardized questionnaires on privacy attitudes, prosocialness, and acceptance of data sharing within a bounded, history-carrying session. To evaluate value-action alignment under competing attitudes, we use multi-group structural equation modeling (MGSEM) to estimate the paths from privacy concerns and prosocialness to data sharing. We propose the Value-Action Alignment Rate (VAAR), a human-referenced directional-agreement metric that aggregates path-level evidence for the expected signs. Across multiple LLMs, we observe stable but model-specific profiles over privacy attitudes, prosocialness (PSA), and acceptance of data sharing (AoDS), alongside substantial heterogeneity in value-action alignment.
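The abstract does not give VAAR's exact formula, but its description — aggregating path-level evidence for expected signs — suggests a simple directional-agreement rate. The sketch below is an illustrative assumption, not the paper's definition: each structural path carries an estimated coefficient and a human-referenced expected sign, and VAAR is the fraction of paths whose estimated sign matches the expectation.

```python
from dataclasses import dataclass

@dataclass
class PathEstimate:
    name: str            # e.g. "privacy_concern -> data_sharing" (hypothetical labels)
    coefficient: float   # path coefficient estimated by the SEM
    expected_sign: int   # +1 or -1, the human-referenced expected direction

def vaar(paths: list[PathEstimate]) -> float:
    """Fraction of structural paths whose estimated sign matches the
    human-expected direction. A minimal sketch; the paper's metric may
    additionally weight paths by significance or effect size."""
    if not paths:
        raise ValueError("no paths supplied")
    matches = sum(
        1 for p in paths
        if (p.coefficient > 0) == (p.expected_sign > 0)
    )
    return matches / len(paths)

# Hypothetical example: privacy concern suppresses sharing (expected -),
# prosocialness promotes it (expected +).
paths = [
    PathEstimate("privacy_concern -> data_sharing", -0.31, -1),
    PathEstimate("prosocialness -> data_sharing", 0.42, +1),
    PathEstimate("privacy_concern -> prosocialness", -0.05, +1),
]
print(vaar(paths))  # 2 of 3 paths agree in sign
```

Under this reading, a VAAR near 1 indicates that a model's expressed attitudes relate to its data-sharing actions in the humanlike directions, while heterogeneity across models shows up as divergent rates.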