As LLMs are deployed in knowledge-intensive settings (e.g., surgery, astronomy, therapy), users expect not just answers, but also meaningful explanations for those answers. In these settings, users are often domain experts (e.g., doctors, astrophysicists, psychologists) who require explanations that reflect expert-level reasoning. However, current evaluation schemes primarily emphasize the plausibility or internal faithfulness of an explanation, criteria that fail to capture whether its content actually aligns with expert intuition. We formalize expert alignment as a criterion for evaluating explanations and introduce T-FIX, a benchmark spanning seven knowledge-intensive domains. In collaboration with domain experts, we develop novel metrics to measure the alignment of LLM explanations with expert judgment.