The proliferation of online hate speech poses a significant threat to the harmony of the web. While explicit hate is easily recognized through overt slurs, implicit hate speech is often conveyed through sarcasm, irony, stereotypes, or coded language, making it harder to detect. Existing hate speech detection models, which predominantly rely on surface-level linguistic cues, fail to generalize effectively across diverse stylistic variations. Moreover, hate speech on different platforms often targets distinct groups and adopts distinct styles, potentially inducing spurious correlations between these platform-specific attributes and labels, further challenging current detection approaches. Motivated by these observations, we hypothesize that the generation of hate speech can be modeled as a causal graph involving four key factors: contextual environment, creator motivation, target, and style. Guided by this graph, we propose CADET, a causal representation learning framework that disentangles hate speech into interpretable latent factors and then controls confounders, thereby isolating genuine hate intent from superficial linguistic cues. Furthermore, CADET enables counterfactual reasoning by intervening on style within the latent space, naturally guiding the model to robustly identify hate speech in varying forms. CADET demonstrates superior performance in comprehensive experiments, highlighting the potential of causal priors in advancing generalizable hate speech detection.
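The counterfactual style intervention described above can be illustrated with a minimal toy sketch. Everything here is hypothetical (block sizes, encoder weights, function names are not from the paper): the latent vector is partitioned into the four causal factors, a do-style intervention swaps only the style block, and a classifier that reads the non-style blocks is, by construction, invariant to that intervention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent layout mirroring the four factors in the causal graph:
# [context | motivation | target | style], 4 dims each (illustrative sizes).
DIMS = {"context": slice(0, 4), "motivation": slice(4, 8),
        "target": slice(8, 12), "style": slice(12, 16)}

W_enc = rng.normal(size=(32, 16))   # toy encoder weights (not the real model)
w_clf = rng.normal(size=12)         # classifier reads only non-style dims

def encode(x):
    """Map an input feature vector to the disentangled latent z."""
    return np.tanh(x @ W_enc)

def intervene_style(z, new_style):
    """Counterfactual do(style = new_style): replace only the style block."""
    z_cf = z.copy()
    z_cf[DIMS["style"]] = new_style
    return z_cf

def predict_hate(z):
    """Hate-intent score computed from the non-style factors only."""
    return float(1 / (1 + np.exp(-(z[:12] @ w_clf))))

x = rng.normal(size=32)
z = encode(x)
z_cf = intervene_style(z, rng.normal(size=4))
# The score is unchanged under the style intervention, since the
# classifier never reads the style dimensions.
print(predict_hate(z) == predict_hate(z_cf))
```

In the actual framework the invariance would be learned (e.g. via a consistency objective across style interventions) rather than hard-wired as here; the sketch only shows why intervening on a disentangled style factor leaves hate intent untouched.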