A growing body of research illustrates the negative impact of social media bots in amplifying harmful information, with widespread social implications. Social bot detection algorithms have been developed to help identify these bot agents efficiently. While such algorithms can help mitigate the harmful effects of social media bots, they operate within complex socio-technical systems that include users and organizations. As such, ethical considerations are critical when developing and deploying these bot detection algorithms, especially at scales as massive as social media ecosystems. In this article, we examine the ethical implications of social bot detection systems through three pillars: training datasets, algorithm development, and the use of bot agents. We do so by surveying the training datasets of existing bot detection algorithms, evaluating existing bot detection datasets, and drawing on discussions of users' experiences of being detected as bots. This examination is grounded in the FATe framework, which considers Fairness, Accountability, and Transparency in tech ethics. We then elaborate on the challenges that researchers face in addressing ethical issues in bot detection and provide recommendations for future research directions. We aim for this preliminary discussion to inspire more responsible and equitable approaches to improving the social media bot detection landscape.