Indirect reciprocity, the practice of helping those who help others, is difficult to sustain among decentralized, self-interested LLM agents without reliable reputation systems. We introduce the Agentic Linguistic Gossip Network (ALIGN), an automated framework in which agents strategically share open-ended gossip using hierarchical tones to evaluate trustworthiness and coordinate social norms. We demonstrate that ALIGN consistently improves indirect reciprocity and resists malicious entrants by identifying and ostracizing defectors without changing intrinsic incentives. Notably, we find that stronger reasoning capabilities in LLMs lead to more incentive-aligned cooperation, whereas chat models often over-cooperate even when doing so is strategically suboptimal. These results suggest that leveraging LLM reasoning through decentralized gossip is a promising path for maintaining social welfare in agentic ecosystems. Our code is available at https://github.com/shuhui-zhu/ALIGN.