Learning from human feedback~(LHF) assumes that expert judgments, appropriately aggregated, yield valid ground truth for training and evaluating AI systems. We tested this assumption in mental health, where high safety stakes make expert consensus essential. Three certified psychiatrists independently evaluated LLM-generated responses using a calibrated rubric. Despite similar training and shared instructions, inter-rater reliability was consistently poor (ICC $= 0.087$--$0.295$), falling below thresholds considered acceptable for consequential assessment. Disagreement was highest on the most safety-critical items: suicide and self-harm responses produced greater divergence than any other category, and this divergence was systematic rather than random. One factor yielded negative reliability (Krippendorff's $\alpha = -0.203$), indicating structured disagreement worse than chance. Qualitative interviews revealed that the disagreement reflects coherent but incompatible individual clinical frameworks (safety-first, engagement-centered, and culturally informed orientations) rather than measurement error. By demonstrating that experts rely on holistic risk heuristics rather than granular factor discrimination, these findings suggest that aggregated labels function as arithmetic compromises that erase grounded professional philosophies. Our results characterize expert disagreement in safety-critical AI as a sociotechnical phenomenon in which professional experience introduces principled, sophisticated divergence. We discuss implications for reward modeling, safety classification, and evaluation benchmarks, and recommend that practitioners shift from consensus-based aggregation to alignment methods that preserve and learn from expert disagreement.
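For readers unfamiliar with the agreement statistic behind the negative reliability result, the following minimal sketch computes Krippendorff's $\alpha$ for interval-scale ratings with no missing values. The data shown are hypothetical, not the study's; the sketch only illustrates how systematic inversions between raters push $\alpha$ below zero (observed disagreement exceeding chance-level disagreement), whereas random noise would merely pull it toward zero.

```python
import numpy as np

def krippendorff_alpha_interval(ratings):
    """Krippendorff's alpha for interval data, alpha = 1 - D_o / D_e.

    ratings: 2-D array-like, rows = units (items rated), columns = raters,
    with no missing values. Returns 1.0 for perfect agreement, ~0 for
    chance-level agreement, and negative values for structured disagreement.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_units, n_raters = ratings.shape
    n = n_units * n_raters  # total number of ratings

    # Observed disagreement: squared differences between ratings of the
    # same unit, summed over ordered rater pairs, weighted by 1/(m_u - 1).
    d_o = 0.0
    for unit in ratings:
        pairwise_sq = (unit[:, None] - unit[None, :]) ** 2
        d_o += pairwise_sq.sum() / (n_raters - 1)
    d_o /= n

    # Expected disagreement: squared differences over all ordered pairs of
    # ratings, pooled across units and raters.
    vals = ratings.ravel()
    d_e = ((vals[:, None] - vals[None, :]) ** 2).sum() / (n * (n - 1))

    return 1.0 - d_o / d_e

# Hypothetical 5-point ratings from three raters: rater 3 tracks rater 1,
# while rater 2 is systematically inverted relative to both.
example = [[5, 1, 4], [4, 2, 5], [1, 5, 2], [2, 4, 1]]
print(krippendorff_alpha_interval(example))  # negative: structured disagreement
```

The sign of $\alpha$ is what matters here: with systematically inverted raters, within-unit disagreement exceeds the disagreement expected by chance, so the ratio $D_o / D_e$ exceeds 1 and $\alpha$ goes negative, exactly the pattern the abstract reports for one factor.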