The ability to accurately identify authorship is crucial for verifying content authenticity and mitigating misinformation. Large Language Models (LLMs) have demonstrated an exceptional capacity for reasoning and problem-solving. However, their potential in authorship analysis remains under-explored. Traditional studies have depended on hand-crafted stylistic features, whereas state-of-the-art approaches leverage text embeddings from pre-trained language models. These methods, which typically require fine-tuning on labeled data, often suffer from performance degradation in cross-domain applications and provide limited explainability. This work seeks to address three research questions: (1) Can LLMs perform zero-shot, end-to-end authorship verification effectively? (2) Are LLMs capable of accurately attributing authorship among multiple candidate authors (e.g., 10 and 20)? (3) Can LLMs provide explainability in authorship analysis, particularly through the role of linguistic features? Moreover, we investigate the integration of explicit linguistic features to guide LLMs in their reasoning processes. Our evaluation demonstrates that LLMs handle both tasks proficiently without domain-specific fine-tuning, offering explanations for their decision-making through a detailed analysis of linguistic features. This establishes a new benchmark for future research on LLM-based authorship analysis.
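As a rough illustration of the zero-shot verification setting described above, the sketch below shows how one might prompt an LLM to compare two texts and return a same-author judgment with a feature-based explanation. The helper `query_llm`, the function names, and the prompt wording are illustrative assumptions, not the exact setup used in this work.

```python
# Minimal sketch of zero-shot authorship verification with an LLM.
# `query_llm` is a hypothetical callable that sends a prompt to an LLM
# and returns its text response; the prompt wording is an assumption.

def build_verification_prompt(text_a: str, text_b: str) -> str:
    """Ask the model to compare writing style and answer True/False."""
    return (
        "You are a linguistic analyst. Compare the writing style of the two "
        "texts below (e.g., punctuation, vocabulary richness, sentence "
        "structure, tone) and decide whether they were written by the same "
        "author.\n\n"
        f"Text 1:\n{text_a}\n\nText 2:\n{text_b}\n\n"
        "Answer 'True' or 'False', followed by a brief explanation grounded "
        "in the linguistic features you observed."
    )

def verify_authorship(text_a: str, text_b: str, query_llm) -> bool:
    """Return True if the LLM judges the two texts to share an author."""
    response = query_llm(build_verification_prompt(text_a, text_b))
    return response.strip().lower().startswith("true")
```

The attribution setting follows the same pattern, except the prompt lists the writing samples of the candidate authors and asks the model to name the most likely one.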