Content moderation at a global scale must navigate a complex array of local cultural distinctions that can hinder effective enforcement. While global policies aim for consistency and broad applicability, they often miss subtleties of regional language use, cultural beliefs, and local legislation. This work introduces a flexible framework that augments foundation language models with cultural knowledge. Our approach fine-tunes encoder-decoder models on media-diet data to capture cultural nuance, then applies a continued-training regime to integrate these models into a content moderation pipeline. We evaluate the framework in a case study of an online podcast platform with content spanning multiple regions. The results show that our culturally adapted models improve the accuracy of local violation detection and produce explanations that align more closely with regional cultural norms. These findings underscore the need for content moderation that adapts to the diverse cultural landscapes in which it operates, and represent a step toward a more equitable and culturally sensitive moderation framework.
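The pipeline integration described above can be illustrated with a minimal sketch. All names here (`CulturalModerationPipeline`, `ModerationResult`, the toy keyword classifiers) are hypothetical stand-ins for the culturally fine-tuned models, not the paper's actual implementation; the sketch only shows the routing pattern of dispatching each item to a region-adapted model with a global fallback.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModerationResult:
    violates: bool
    explanation: str

# Hypothetical keyword classifiers standing in for the culturally
# fine-tuned encoder-decoder models; a real system would load a
# region-adapted checkpoint here instead.
def _classifier_for(region: str) -> Callable[[str], ModerationResult]:
    banned = {"us": {"spamword"}, "de": {"verboten"}}.get(region, set())
    def classify(text: str) -> ModerationResult:
        hits = sorted(w for w in banned if w in text.lower())
        if hits:
            return ModerationResult(True, f"Matched regional terms: {hits}")
        return ModerationResult(False, "No regional violation detected")
    return classify

class CulturalModerationPipeline:
    """Routes each item to a region-adapted model, falling back to a
    global policy model when no regional model is registered."""
    def __init__(self) -> None:
        self.models: Dict[str, Callable[[str], ModerationResult]] = {}

    def register(self, region: str) -> None:
        self.models[region] = _classifier_for(region)

    def moderate(self, text: str, region: str) -> ModerationResult:
        model = self.models.get(region, _classifier_for("global"))
        return model(text)

pipeline = CulturalModerationPipeline()
pipeline.register("de")
result = pipeline.moderate("Das ist verboten", "de")
```

The key design choice is that regional models are plug-in components behind a uniform interface, so adding coverage for a new locale does not require retraining or touching the global policy path.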