Current stance detection research typically predicts stance toward targets that are given in advance. In real-world social media scenarios, however, targets are neither predefined nor static; they are complex and dynamic. To address this challenge, we propose a novel task: zero-shot stance detection in the wild with Dynamic Target Generation and Multi-Target Adaptation (DGTA), which aims to automatically identify multiple target-stance pairs from text without prior knowledge of the targets. We construct a Chinese social media stance detection dataset and design multi-dimensional evaluation metrics. We explore both integrated and two-stage fine-tuning strategies for large language models (LLMs) and evaluate a range of baseline models. Experimental results demonstrate that fine-tuned LLMs achieve superior performance on this task: the two-stage fine-tuned Qwen2.5-7B attains the highest comprehensive target-recognition score of 66.99%, while the integrated fine-tuned DeepSeek-R1-Distill-Qwen-7B achieves a stance detection F1 score of 79.26%.