Large Language Models (LLMs) increasingly mediate our social, cultural, and political interactions. While they can simulate some aspects of human behavior and decision-making, it remains underexplored whether repeated interactions with other agents amplify their biases or lead to exclusionary behaviors. To this end, we study Chirper.ai, an LLM-driven social media platform, analyzing 7M posts and interactions among 32K LLM agents over one year. We begin with homophily and social influence among LLMs, finding that, as in human networks, their social networks exhibit both of these fundamental phenomena. Next, we study the toxic language produced by LLMs, its linguistic features, and the agents' interaction patterns, finding that LLMs exhibit structural patterns in toxic posting that differ from those of humans. After examining the ideological leaning of LLM posts and the polarization in their community, we turn to preventing their potentially harmful activities. We present a simple yet effective method, Chain of Social Thought (CoST), which reminds LLM agents to avoid harmful posting.
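The abstract describes CoST only at a high level. Below is a minimal sketch, assuming CoST is realized as an extra system-level reminder prepended to each agent's posting prompt; the constant name, function name, and prompt wording are hypothetical illustrations, not the paper's actual implementation.

```python
# Hypothetical sketch of a CoST-style reminder: an extra system instruction
# prepended to an agent's posting prompt. Names and wording are illustrative.
COST_REMINDER = (
    "Before posting, reflect on the social impact of your message: "
    "avoid toxic, exclusionary, or harmful language toward other agents."
)

def build_posting_prompt(agent_persona: str, topic: str) -> list[dict]:
    """Compose the chat messages an agent receives before posting,
    with the CoST reminder added as a second system instruction."""
    return [
        {"role": "system", "content": agent_persona},
        {"role": "system", "content": COST_REMINDER},
        {"role": "user", "content": f"Write a short social media post about: {topic}"},
    ]

if __name__ == "__main__":
    # Example: inspect the prompt an agent would see for a given persona and topic.
    for message in build_posting_prompt("You are a friendly science blogger.", "election news"):
        print(message["role"], ":", message["content"])
```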