As generative foundation models improve, they also tend to become more persuasive, raising concerns that AI automation will enable governments, firms, and other actors to manipulate beliefs with unprecedented scale and effectiveness at virtually no cost. The full economic and social ramifications of this trend have been difficult to foresee, however, given that we currently lack a complete theoretical understanding of why persuasion is costly for human labor to produce in the first place. This paper places human and AI agents on a common conceptual footing by formalizing informational persuasion as a mathematical decision problem and characterizing its computational complexity. A novel proof establishes that persuasive messages are challenging to discover (NP-hard) but easy to adopt if supplied by others (in NP). This asymmetry helps explain why people are susceptible to persuasion, even in contexts where all relevant information is publicly available. The result also illuminates why litigation, strategic communication, and other persuasion-oriented activities have historically been so human-capital intensive, and it provides a new theoretical basis for studying how AI will impact various industries.
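The search/verification asymmetry at the heart of the argument can be made concrete with Boolean satisfiability (SAT), the canonical NP-complete problem. The sketch below is purely illustrative of the complexity-class distinction and is not the paper's formalization of persuasion; the CNF encoding and the `verify`/`search` helpers are assumptions for the example. Checking a candidate solution takes time linear in the formula, while finding one by exhaustive search takes time exponential in the number of variables.

```python
from itertools import product

# A CNF formula is a list of clauses; each clause is a list of integer
# literals (positive k means "variable k is true", negative k means false).

def verify(cnf, assignment):
    """Check a supplied assignment in time linear in the formula size:
    the 'easy to adopt if supplied by others' direction (membership in NP)."""
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in cnf
    )

def search(cnf, n_vars):
    """Brute-force search over all 2^n assignments: the 'challenging to
    discover' direction. No known algorithm avoids worst-case exponential
    time unless P = NP."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(cnf, assignment):
            return assignment
    return None  # unsatisfiable

# Example: (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
cnf = [[1, -2], [2, 3], [-1, -3]]
model = search(cnf, 3)          # expensive: explores the assignment space
assert model is not None
assert verify(cnf, model)       # cheap: one linear pass over the clauses
```

In the paper's framing, the persuader bears the `search` cost of discovering a message that works, while the audience only performs the cheap `verify` step, which is one way to read both why persuasion is labor-intensive to produce and why it is readily accepted once produced.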