Automatic detection of hate and abusive language is essential to combat its online spread. Moreover, recognising and explaining hate speech serves to educate people about its negative effects. However, most current detection models operate as black boxes, lacking interpretability and explainability. In this context, Large Language Models (LLMs) have proven effective for hate speech detection and for promoting interpretability. Nevertheless, they are computationally costly to run. In this work, we propose distilling large language models using Chain-of-Thought prompting to extract explanations that support the hate speech classification task. Having small language models for these tasks will facilitate their use in operational settings. In this paper, we demonstrate that distilled models deliver explanations of the same quality as larger models while surpassing them in classification performance. This dual capability, classifying and explaining, advances hate speech detection, making it more affordable, understandable and actionable.