TikTok has seen exponential growth as a platform, fuelled by the success of its proprietary recommender algorithm, which serves tailored content to every user, though not without controversy. Users complain that their content is unfairly suppressed by "the algorithm", particularly users with marginalised identities such as LGBTQ+ users. Together with content removal, this suppression acts to censor what is shared on the platform. Journalists have revealed biases in automatic censorship, as well as in human moderation. We investigate experiences of censorship on TikTok across users marginalised by their gender, LGBTQ+ identity, disability or ethnicity. We survey 627 UK-based TikTok users and find that marginalised users often feel they are subject to censorship for content that does not violate community guidelines. We highlight many avenues for future research into censorship on TikTok, with a focus on users' folk theories, which greatly shape their experiences of the platform.