Recent advances in the field of generative artificial intelligence (AI) have blurred the lines between authentic and machine-generated content, making it almost impossible for humans to distinguish between such media. One notable consequence is the use of AI-generated images for fake profiles on social media. While several types of disinformation campaigns and similar incidents have been reported in the past, a systematic analysis has been lacking. In this work, we conduct the first large-scale investigation of the prevalence of AI-generated profile pictures on Twitter. We tackle the challenges of a real-world measurement study by carefully integrating various data sources and designing a multi-stage detection pipeline. Our analysis of nearly 15 million Twitter profile pictures shows that 0.052% were artificially generated, confirming their notable presence on the platform. We comprehensively examine the characteristics of these accounts and their tweet content, and uncover patterns of coordinated inauthentic behavior. The results also reveal several motives, including spamming and political amplification campaigns. Our research reaffirms the need for effective detection and mitigation strategies to cope with the potential negative effects of generative AI in the future.