Authorship attribution aims to identify the origin or author of a document. Traditional approaches rely heavily on manually engineered features and fail to capture long-range correlations, limiting their effectiveness. More recent approaches leverage text embeddings from pre-trained language models, but these require substantial fine-tuning on labeled data, creating data dependence and offering limited interpretability. Large Language Models (LLMs), with their deep reasoning capabilities and ability to maintain long-range textual associations, offer a promising alternative. This study explores the potential of pre-trained LLMs for one-shot authorship attribution, specifically through Bayesian approaches applied to the probability outputs of LLMs. Our method computes the probability that a text entails an author's previous writings, reflecting a more nuanced understanding of authorship. Using only pre-trained models such as Llama-3-70B, our results on the IMDb and blog datasets reach 85\% accuracy in one-shot authorship classification across ten authors. These findings set new baselines for one-shot authorship analysis with LLMs and broaden the application scope of these models in forensic linguistics. This work also includes extensive ablation studies to validate our approach.
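The attribution scheme the abstract describes, scoring a query text by its conditional probability given each candidate author's prior writing and picking the most probable author under a uniform prior, can be sketched as follows. This is an illustrative stand-in only: the abstract's method uses an LLM's (e.g. Llama-3-70B's) token log-probabilities, whereas here a Laplace-smoothed character-bigram model fit to each author's exemplar plays the role of the conditional probability, so the selection logic, not the scoring model, is what matches the paper.

```python
import math
from collections import Counter

def exemplar_logprob(text, reference, alpha=1.0):
    # Stand-in for an LLM's log P(text | author exemplar):
    # score `text` under a Laplace-smoothed character-bigram
    # model estimated from the author's `reference` writing.
    bigrams = Counter(zip(reference, reference[1:]))
    unigrams = Counter(reference[:-1])
    vocab = len(set(reference)) or 1
    lp = 0.0
    for a, b in zip(text, text[1:]):
        lp += math.log((bigrams[(a, b)] + alpha)
                       / (unigrams[a] + alpha * vocab))
    return lp

def attribute(text, exemplars):
    # One-shot Bayesian attribution with a uniform prior over authors:
    # argmax over log P(text | exemplar), one exemplar per author.
    scores = {author: exemplar_logprob(text, ex)
              for author, ex in exemplars.items()}
    return max(scores, key=scores.get)
```

With an LLM, `exemplar_logprob` would instead sum the model's token log-probabilities for the query text conditioned on the author's exemplar in the context window; the argmax decision rule is unchanged.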