Counterfactual statements, which describe events that did not or cannot take place, benefit numerous NLP applications. We therefore study the problem of counterfactual detection (CFD) and seek to enhance CFD models. Previous models rely on clue phrases to predict counterfactuality, so they suffer a significant performance drop when clue-phrase hints are absent at test time. Moreover, these models tend to favor non-counterfactual predictions over counterfactual ones. To address these issues, we propose to integrate a neural topic model into the CFD model to capture the global semantics of the input statement. We further causally intervene on the hidden representations of the CFD model to balance the effect of the class labels. Extensive experiments show that our approach outperforms previous state-of-the-art CFD and bias-resolving methods on both CFD and other bias-sensitive tasks.
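The fusion of global topic semantics with the detector's sentence representation can be sketched minimally as below. This is an illustrative reconstruction, not the paper's actual architecture: the function name `fuse_and_score`, the concatenation-based fusion, and all dimensions are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_score(sent_emb, topic_dist, W, b):
    """Concatenate the sentence embedding with the document-topic
    distribution, then apply a linear layer + softmax over the two
    classes {non-counterfactual, counterfactual}.
    (Hypothetical fusion scheme, not the paper's exact model.)"""
    h = np.concatenate([sent_emb, topic_dist])
    return softmax(W @ h + b)

d_emb, n_topics = 8, 4
sent_emb = rng.normal(size=d_emb)                 # e.g. from a text encoder
topic_dist = softmax(rng.normal(size=n_topics))   # e.g. from a neural topic model
W = rng.normal(size=(2, d_emb + n_topics))        # classifier weights
b = np.zeros(2)

probs = fuse_and_score(sent_emb, topic_dist, W, b)
print(probs)  # class probabilities: [non-counterfactual, counterfactual]
```

The point of the sketch is only that the topic distribution gives the classifier a global-semantics signal that does not depend on any local clue phrase.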