Multi-agent systems - systems with multiple independent AI agents working together to achieve a common goal - are becoming increasingly prevalent in daily life. Drawing inspiration from the phenomenon of human group social influence, we investigate whether a group of AI agents can create social pressure on users to agree with them, potentially changing their stance on a topic. We conducted a study in which participants discussed social issues with either a single or multiple AI agents, and where the agents either agreed or disagreed with the user's stance on the topic. We found that conversing with multiple agents (holding conversation content constant) increased the social pressure felt by participants, and caused a greater shift in opinion towards the agents' stances on each topic. Our study shows the potential advantages of multi-agent systems over single-agent platforms in causing opinion change. We discuss design implications for possible multi-agent systems that promote social good, as well as the potential for malicious actors to use these systems to manipulate public opinion.