Exposure and susceptibility to misinformation vary across demographic groups, with some populations more vulnerable than others. Large language models (LLMs) introduce new dimensions to these challenges through their ability to generate persuasive content at scale and to reinforce existing biases. This study investigates the bidirectional persuasion dynamics between LLMs and humans when exposed to misinformative content. We analyze human-to-LLM influence using human-stance datasets and assess LLM-to-human influence by generating LLM-based persuasive arguments. Additionally, we use a multi-agent LLM framework to analyze the spread of misinformation under persuasion among demographic-oriented LLM agents. Our findings show that demographic factors influence susceptibility to misinformation in LLMs, closely mirroring the demographic-based patterns seen in human susceptibility. We also find that, like human demographic groups, multi-agent LLMs exhibit echo chamber behavior. This research explores the interplay between humans and LLMs, highlighting demographic differences in the context of misinformation and offering insights for future interventions.