Attributing outputs from Large Language Models (LLMs) in adversarial settings, such as cyberattacks and disinformation campaigns, presents significant challenges that are likely to grow in importance. We investigate this attribution problem using formal language theory, specifically language identification in the limit as introduced by Gold and extended by Angluin. By modeling LLM outputs as formal languages, we analyze whether finite text samples can uniquely pinpoint the originating model. Our results show that, owing to the non-identifiability of certain language classes and under mild assumptions about overlapping outputs from fine-tuned models, it is theoretically impossible to attribute outputs to specific LLMs with certainty. This holds even when the expressivity limitations of Transformer architectures are taken into account. Moreover, even with direct model access or comprehensive monitoring, significant computational hurdles impede attribution efforts. These findings highlight an urgent need for proactive measures to mitigate the risks posed by adversarial LLM use as their influence continues to expand.
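For readers unfamiliar with the framework, the following is a standard textbook statement of Gold's criterion of identification in the limit from positive data; it is offered as background only, and the paper's own formalization may differ in its details. A class of languages $\mathcal{L} = \{L_1, L_2, \dots\}$ is identifiable in the limit from positive data if there exists a learner $M$ such that
\[
\forall L_j \in \mathcal{L},\ \forall\ \text{texts } t \text{ for } L_j:\ \exists n_0\ \forall n \ge n_0:\ M(t_1, \dots, t_n) = i \ \text{with}\ L_i = L_j,
\]
where a text for $L_j$ is any infinite sequence $t_1, t_2, \dots$ whose set of elements is exactly $L_j$. Gold's classical result is that any class containing every finite language together with at least one infinite language fails this criterion; this is the form of non-identifiability invoked above.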