Learning from human feedback~(LHF) assumes that expert judgments, appropriately aggregated, yield valid ground truth for training and evaluating AI systems. We tested this assumption in mental health, where high safety stakes make expert consensus essential. Three certified psychiatrists independently evaluated LLM-generated responses using a calibrated rubric. Despite similar training and shared instructions, inter-rater reliability was consistently poor ($\mathrm{ICC} = 0.087$--$0.295$), falling below thresholds considered acceptable for consequential assessment. Disagreement was highest on the most safety-critical items: responses involving suicide and self-harm produced greater divergence than any other category, and that divergence was systematic rather than random. One factor yielded negative reliability (Krippendorff's $\alpha = -0.203$), indicating structured disagreement worse than chance. Qualitative interviews revealed that disagreement reflects coherent but incompatible individual clinical frameworks (safety-first, engagement-centered, and culturally informed orientations) rather than measurement error. By demonstrating that experts rely on holistic risk heuristics rather than granular factor discrimination, these findings suggest that aggregated labels function as arithmetic compromises that effectively erase grounded professional philosophies. Our results characterize expert disagreement in safety-critical AI as a sociotechnical phenomenon in which professional experience introduces sophisticated layers of principled divergence. We discuss implications for reward modeling, safety classification, and evaluation benchmarks, recommending that practitioners shift from consensus-based aggregation to alignment methods that preserve and learn from expert disagreement.
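To illustrate how a negative Krippendorff's alpha can arise from structured (rather than random) disagreement, the following is a minimal sketch of alpha for interval-scaled rubric scores with no missing ratings; it is an illustrative implementation, not the study's analysis code, and the toy rating matrix is hypothetical.

```python
from itertools import combinations

def krippendorff_alpha_interval(ratings):
    """Krippendorff's alpha for interval data with no missing values.

    ratings: list of units, each a list with one score per rater.
    alpha = 1 - D_o / D_e, where D_o is the mean squared difference
    between ratings within units and D_e is the mean squared difference
    between all pooled ratings.
    """
    units = [[float(x) for x in u] for u in ratings]
    n, m = len(units), len(units[0])
    # Observed disagreement: within-unit pairwise squared differences,
    # averaged over all n*m*(m-1) ordered within-unit pairs.
    d_o = sum((a - b) ** 2
              for u in units
              for a, b in combinations(u, 2)) * 2 / (n * m * (m - 1))
    # Expected disagreement: pairwise squared differences over all
    # pooled values, as if ratings were shuffled across units.
    pooled = [x for u in units for x in u]
    big_n = len(pooled)
    d_e = sum((a - b) ** 2
              for a, b in combinations(pooled, 2)) * 2 / (big_n * (big_n - 1))
    return 1.0 - d_o / d_e

# Hypothetical example: two raters applying opposed frameworks produce
# systematically reversed scores, so within-unit disagreement exceeds
# chance-level disagreement and alpha goes negative.
reversed_raters = [[1, 5], [2, 4], [3, 3], [4, 2], [5, 1]]
print(krippendorff_alpha_interval(reversed_raters))  # ≈ -0.8
```

Because $D_e$ is computed from the pooled ratings, raters who disagree more within units than randomly paired ratings would disagree drive $\alpha$ below zero, which is exactly the signature of principled, opposed evaluation frameworks rather than noise.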