AI-based social media platforms have already transformed the nature of economic and social interaction. AI enables the massive scale and highly personalized nature of online information sharing that we now take for granted. Extensive attention has been devoted to the polarization that social media platforms appear to facilitate. However, a key implication of the transformation driven by these AI-powered platforms has received far less attention: how platforms shape what observers of online discourse come to believe about community views. These observers include policymakers and legislators, who look to social media to gauge the prospects for policy and legislative change, as well as developers of AI models trained on large-scale internet data, whose outputs may similarly reflect a distorted view of public opinion. In this paper, we present a nested game-theoretic model showing how observed online opinion is produced by the interaction of three sets of decisions: users' choices about whether, and with what rhetorical intensity, to share their opinions on a platform; the efforts of organizations (such as traditional media and advocacy organizations) that seek to encourage or discourage opinion-sharing online; and the operation of AI-powered recommender systems controlled by social media platforms. We show that signals from ideological organizations encourage an increase in rhetorical intensity, leading to the 'rational silence' of moderate users and, in turn, a polarized impression of where average opinion lies. We further show that this observed polarization can be amplified by recommender systems that foster online communities which end up seeing a skewed sample of opinion. Finally, we identify practical strategies platforms can implement, such as reducing exposure to signals from ideological organizations and adopting a tailored approach to content moderation.
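The 'rational silence' mechanism can be illustrated with a toy simulation (a minimal sketch, not the paper's formal model; the participation rule and intensity values below are hypothetical): opinions in the population are centrist on average, but when the cost of speaking rises with ambient rhetorical intensity, moderates drop out first and the opinions observers actually see become far more dispersed.

```python
import random
import statistics

# Toy sketch of 'rational silence' (hypothetical participation rule, not
# the paper's model): opinions are drawn from a bell curve centered at 0.
random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(10_000)]

def shared_opinions(opinions, intensity):
    # Assumed rule: a user with opinion x speaks only when the strength of
    # that opinion, |x|, exceeds the cost of engaging at the current
    # ambient rhetorical intensity. Moderates (x near 0) fall silent first.
    return [x for x in opinions if abs(x) > intensity]

calm = shared_opinions(population, 0.2)    # low ambient intensity
heated = shared_opinions(population, 1.5)  # after ideological signaling

# The underlying population mean is near zero in both regimes, but the
# sample observers see shrinks and its spread widens sharply: the visible
# discourse looks polarized even though the population has not changed.
print(f"population mean: {statistics.mean(population):.2f}")
print(f"visible spread (calm):   {statistics.stdev(calm):.2f}")
print(f"visible spread (heated): {statistics.stdev(heated):.2f}")
print(f"share still speaking (heated): {len(heated) / len(population):.0%}")
```

Under these assumptions the visible sample in the heated regime is bimodal, so an observer estimating community views from shared posts alone would overstate polarization.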