There is widespread concern about the negative impacts of social media feed ranking algorithms on political polarization. Leveraging advancements in large language models (LLMs), we develop an approach to re-rank feeds in real time to test the effects of content that is likely to polarize: expressions of antidemocratic attitudes and partisan animosity (AAPA). In a preregistered 10-day field experiment on X/Twitter with 1,256 consented participants, we increase or decrease participants' exposure to AAPA in their algorithmically curated feeds. We observe more positive outparty feelings when AAPA exposure is decreased and more negative outparty feelings when AAPA exposure is increased. Exposure to AAPA content also results in an immediate increase in negative emotions, such as sadness and anger. The interventions do not significantly impact traditional engagement metrics such as re-post and favorite rates. These findings highlight a potential pathway for developing feed algorithms that mitigate affective polarization by addressing content that undermines the shared values required for a healthy democracy.
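The re-ranking intervention described above can be sketched in miniature. This is an illustrative sketch only: the `aapa_score_stub` keyword heuristic, the `Post` structure, and the `rerank` threshold are all assumptions standing in for the study's actual pipeline, in which an LLM scores each post for antidemocratic attitudes and partisan animosity before the feed is re-ordered.

```python
# Illustrative sketch of LLM-based feed re-ranking (not the authors' code).
# aapa_score_stub is a trivial keyword placeholder for an LLM classifier.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    rank: int  # original algorithmic rank (0 = top of feed)

def aapa_score_stub(text: str) -> float:
    """Placeholder for an LLM call rating antidemocratic attitudes and
    partisan animosity (AAPA) on a 0-1 scale. Hypothetical heuristic."""
    hostile_markers = ("traitor", "enemy", "destroy")
    hits = sum(m in text.lower() for m in hostile_markers)
    return hits / len(hostile_markers)

def rerank(feed: list[Post], threshold: float = 0.3,
           downrank: bool = True) -> list[Post]:
    """Move posts scoring above `threshold` to the bottom of the feed
    (decrease-exposure condition) or to the top (increase-exposure
    condition), preserving relative order within each group."""
    flagged = [p for p in feed if aapa_score_stub(p.text) > threshold]
    rest = [p for p in feed if aapa_score_stub(p.text) <= threshold]
    return rest + flagged if downrank else flagged + rest
```

Under the decrease-exposure condition (`downrank=True`), high-AAPA posts sink to the bottom of the feed; flipping the flag models the increase-exposure condition, which is what lets a field experiment estimate effects in both directions.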