Quantifying how individuals react to social influence is crucial for understanding collective political behavior online. While many studies of opinion dynamics in public forums focus on social feedback, they often overlook the possibility that human interactions lead to self-censorship. Here, we investigate political deliberation in online spaces by exploring the hypothesis that individuals may refrain from expressing minority opinions publicly because of exposure to toxic behavior. Analyzing conversations under YouTube videos from six prominent US news outlets around the 2020 US presidential election, we observe patterns of self-censorship that signal the influence of peer toxicity on users' behavior. Using hidden Markov models, we identify a latent state consistent with toxicity-driven silence. This state is characterized by reduced user activity and a higher likelihood of posting toxic content, indicating an environment in which extreme and antisocial behaviors thrive. Our findings offer insight into the intricacies of online political deliberation and underscore the importance of accounting for self-censorship dynamics when characterizing ideological polarization in digital spheres.