Large Language Models (LLMs) exhibit significant persuasion capabilities in one-on-one interactions, but their influence within social networks remains underexplored. This study investigates the potential social impact of LLMs in these environments, where interconnected users and complex opinion dynamics pose unique challenges. In particular, we address the following research question: can LLMs learn to generate meaningful content that maximizes user engagement on social networks? To answer this question, we define a pipeline that guides LLM-based content generation using reinforcement learning with simulated feedback. In our framework, the reward is based on an engagement model borrowed from the literature on opinion dynamics and information propagation. Moreover, we constrain the text generated by the LLM to remain aligned with a given topic and to satisfy a minimum fluency requirement. Using our framework, we analyze the capabilities and limitations of LLMs in tackling the given task, specifically considering the relative position of the LLM as an agent within the social network and the distribution of opinions in the network on the given topic. Our findings show the potential of LLMs in creating social engagement. Notable properties of our approach are that the learning procedure adapts to the opinion distribution of the underlying network and is agnostic to the specifics of the engagement model, which is embedded as a plug-and-play component. In this regard, our approach can be readily extended to more complex engagement tasks and interventions in computational social science. The code used for the experiments is publicly available at https://anonymous.4open.science/r/EDCG/.
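The reward structure sketched in the abstract, an engagement score gated by topic-alignment and fluency constraints, can be illustrated as follows. This is a minimal sketch, not the paper's actual implementation: `engagement_model`, `topic_similarity`, and `fluency` are hypothetical placeholders for the plug-and-play components the framework assumes.

```python
from typing import Callable

def make_reward(
    engagement_model: Callable[[str], float],      # plug-and-play engagement score (hypothetical)
    topic_similarity: Callable[[str, str], float], # text-topic alignment in [0, 1] (hypothetical)
    fluency: Callable[[str], float],               # fluency score in [0, 1] (hypothetical)
    topic: str,
    min_alignment: float = 0.5,
    min_fluency: float = 0.5,
) -> Callable[[str], float]:
    """Build a reward for RL fine-tuning: engagement, gated by topic and fluency constraints."""
    def reward(text: str) -> float:
        if topic_similarity(text, topic) < min_alignment:
            return 0.0  # off-topic generations earn no reward
        if fluency(text) < min_fluency:
            return 0.0  # disfluent generations earn no reward
        return engagement_model(text)  # simulated-network engagement drives learning
    return reward
```

Because the engagement model enters only through `engagement_model`, swapping in a different opinion-dynamics or propagation model changes the learning signal without touching the rest of the pipeline, which is the plug-and-play property the abstract highlights.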