Video-sharing social media platforms, such as TikTok, YouTube, and Instagram, implement content moderation policies aimed at reducing minor users' exposure to harmful videos. As video has become the dominant and most immersive form of online content, understanding how effectively this medium is moderated for younger audiences is urgent. In this study, we evaluated the effectiveness of video moderation for different age groups on three of the main video-sharing platforms: TikTok, YouTube, and Instagram. We created experimental accounts with assigned ages of 13 and 18. Using these accounts, we evaluated 3,000 videos served by the platforms in passive scrolling and search modes, recording the frequency and speed at which harmful videos were encountered. Each video was manually assessed for the level and type of harm, using definitions from a unified framework of harmful content. The results show that, in both passive scrolling and search-based scrolling, accounts assigned to the age-13 group encountered videos deemed harmful more frequently and more quickly than those assigned to the age-18 group. On YouTube, 15\% of the videos recommended to 13-year-old accounts during passive scrolling were assessed as harmful, compared to 8.17\% for 18-year-old accounts. For the younger age group, harmful videos appeared on YouTube within an average of 3 minutes and 6 seconds of passive scrolling. Exposure occurred without any user-initiated searches, indicating weaknesses in the platforms' algorithmic filtering systems. These findings point to significant gaps in the current video moderation practices of social media platforms. Furthermore, the ease with which underage users can misrepresent their age demonstrates the urgent need for more robust age verification methods.
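As a minimal illustration of how the exposure metrics reported above (the share of videos coded as harmful per platform, age group, and mode, and the average time of passive scrolling before the first harmful video) could be computed from coded session logs, consider the sketch below. The record layout, field names, and values are hypothetical and shown only for clarity; this is not the study's actual data or analysis code.

```python
from statistics import mean

# Hypothetical per-video log entries:
# (platform, assigned_age, mode, seconds_into_session, coded_harmful)
# Values are illustrative placeholders, not data from the study.
videos = [
    ("YouTube", 13, "passive", 42, True),
    ("YouTube", 13, "passive", 95, False),
    ("YouTube", 18, "passive", 120, False),
    # ... one entry per assessed video
]

def harm_rate(logs, platform, age, mode):
    """Share of assessed videos coded as harmful in one platform/age/mode cell."""
    cell = [v for v in logs if v[0] == platform and v[1] == age and v[2] == mode]
    return sum(v[4] for v in cell) / len(cell) if cell else 0.0

def mean_time_to_first_harm(sessions):
    """Average seconds of scrolling before the first harmful video, over sessions
    that contained at least one harmful video. Each session is a list of
    (seconds_into_session, coded_harmful) tuples."""
    first_hits = [
        min(t for t, harmful in session if harmful)
        for session in sessions
        if any(harmful for _, harmful in session)
    ]
    return mean(first_hits) if first_hits else float("nan")

print(f"Age-13 passive harm rate: {harm_rate(videos, 'YouTube', 13, 'passive'):.2%}")
```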