Satire detection is essential for accurately extracting opinions from textual data and for combating misinformation online. However, the lack of diverse satire corpora leads to stylistic bias, which degrades models' detection performance. This study proposes a debiasing approach for satire detection that reduces biases in the training data by utilizing generative large language models. The approach is evaluated in both cross-domain (irony detection) and cross-lingual (English) settings. Results show that the debiasing method enhances the robustness and generalizability of models on satire and irony detection tasks in Turkish and English, although its impact on causal language models, such as Llama-3.1, is limited. Additionally, this work curates and presents the Turkish Satirical News Dataset with detailed human annotations, along with case studies on classification, debiasing, and explainability.