Stance detection is crucial for fostering a human-centric Web: by analyzing user-generated content, it can identify biases and harmful narratives that undermine trust. Driven by advances in Large Language Models (LLMs), existing approaches treat stance detection as a classification problem, providing robust methodologies for modeling complex group interactions and advancing capabilities in natural language tasks. However, these methods often lack interpretability, limiting their ability to offer transparent and understandable justifications for their predictions. This study adopts a generative approach, in which stance predictions include explicit, interpretable rationales, and distills these rationales into smaller language models (SLMs) through single-task and multitask learning. We find that incorporating reasoning into stance detection enables the smaller model (FlanT5) to outperform GPT-3.5's zero-shot performance, improving it by up to 9.57%. Moreover, our results show that reasoning capabilities enhance multitask learning performance but may reduce effectiveness in single-task settings. Crucially, we demonstrate that faithful rationales improve rationale distillation into SLMs, advancing efforts to build interpretable, trustworthy systems for addressing discrimination, fostering trust, and promoting equitable engagement on social media.
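To make the multitask distillation setup concrete, the following is a minimal sketch of how a label-prediction task and a rationale-generation task can share one FlanT5 model with a weighted joint loss. The task prefixes, the loss weight `alpha`, and the toy example are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: multitask rationale distillation into FlanT5.
# Prefixes, alpha, and the toy example are assumptions for illustration.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")

# One toy training instance (hypothetical; real data would come from a
# stance corpus, with rationales produced by a teacher LLM).
text = "Climate change is a hoax invented to raise taxes."
target = "climate action"
label = "against"
rationale = "The post dismisses climate change as fabricated, opposing the target."

# Task 1: predict the stance label.
label_inputs = tokenizer(
    f"predict stance: text: {text} target: {target}",
    return_tensors="pt", truncation=True)
label_ids = tokenizer(label, return_tensors="pt").input_ids

# Task 2: generate the teacher-provided rationale for the same input.
rat_inputs = tokenizer(
    f"explain stance: text: {text} target: {target}",
    return_tensors="pt", truncation=True)
rat_ids = tokenizer(rationale, return_tensors="pt").input_ids

# Multitask objective: weighted sum of the two sequence-to-sequence losses.
alpha = 0.5  # illustrative weighting between label and rationale tasks
label_loss = model(**label_inputs, labels=label_ids).loss
rat_loss = model(**rat_inputs, labels=rat_ids).loss
loss = alpha * label_loss + (1 - alpha) * rat_loss
loss.backward()  # an optimizer step would follow in a full training loop
```

In the single-task variant, only the label-prediction loss would be used; the multitask variant above adds the rationale-generation loss so the SLM learns to reproduce the teacher's reasoning alongside the label.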