Hateful speech detection is a key component of content moderation, yet current evaluation frameworks rarely assess why a text is deemed hateful. We introduce \textsf{HateXScore}, a four-component metric suite designed to evaluate the reasoning quality of model explanations. It assesses (i) conclusion explicitness, (ii) faithfulness and causal grounding of quoted spans, (iii) protected group identification (policy-configurable), and (iv) logical consistency among these elements. Evaluated on six diverse hate speech datasets, \textsf{HateXScore} serves as a diagnostic complement to standard metrics such as accuracy and F1, revealing interpretability failures and annotation inconsistencies that those metrics cannot capture. Moreover, human evaluation shows strong agreement with \textsf{HateXScore}, validating it as a practical tool for trustworthy and transparent moderation. \textcolor{red}{Disclaimer: This paper contains sensitive content that may be disturbing to some readers.}