Opinion dynamics models how the publicly expressed opinions of users in a social network coevolve under the influence of their neighbors and their own intrinsic opinions. Motivated by the real-world manipulation of social networks during the 2016 US elections and the 2019 Hong Kong protests, a growing body of work models the effects of a strategic actor who interferes with the network to induce disagreement or polarization. We lift the assumption of a single strategic actor by introducing a model in which any subset of network users can manipulate network outcomes by acting according to fictitious intrinsic opinions. Strategic actors can have conflicting goals and push competing narratives. We characterize the Nash equilibrium of the resulting meta-game played by the strategic actors. Experiments on real-world social network datasets from Twitter, Reddit, and Political Blogs show that strategic agents can significantly increase polarization and disagreement, as well as increase the "cost" of the equilibrium. To this end, we give worst-case upper bounds on the Price of Misreporting (analogous to the Price of Anarchy). Finally, we give efficient learning algorithms for the platform to (i) detect whether strategic manipulation has occurred, and (ii) learn who the strategic actors are. Our algorithms are accurate on the same real-world datasets, suggesting how platforms can take steps to mitigate the effects of strategic behavior.