The widespread dissemination of hate speech, harassment, harmful and sexual content, and violence across websites and media platforms presents substantial challenges and provokes broad concern across society. Governments, educators, and parents are often at odds with media platforms about how to regulate, control, and limit the spread of such content. Technologies for detecting and censoring such content are a key part of addressing these challenges. Techniques from natural language processing and computer vision have been widely used to automatically identify and filter out sensitive content such as offensive language, violence, nudity, and addiction in text, images, and videos, enabling platforms to enforce content policies at scale. However, existing methods still fall short of achieving high detection accuracy with low false positive and false negative rates. Therefore, more sophisticated algorithms that understand the context of both text and images may open room for improvement, enabling more effective content censorship systems. In this paper, we evaluate existing LLM-based content moderation solutions such as the OpenAI moderation model and Llama Guard 3 and study their capabilities to detect sensitive content. Additionally, we explore recent LLMs such as GPT, Gemini, and Llama for identifying inappropriate content across media outlets. Various textual and visual datasets, including X tweets, Amazon reviews, news articles, human photos, cartoons, sketches, and violence videos, are used for evaluation and comparison. The results demonstrate that LLMs outperform traditional techniques by achieving higher accuracy and lower false positive and false negative rates. This highlights the potential of integrating LLMs into websites, social media platforms, and video-sharing services for regulatory and content moderation purposes.
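As a minimal sketch of the kind of evaluation loop this abstract describes, the example below queries OpenAI's Moderation endpoint for a small batch of labeled texts and tallies false positives and false negatives; the `samples` list and its label convention are hypothetical placeholders, not the paper's actual datasets or pipeline.

```python
# Minimal sketch: evaluate a moderation endpoint against labeled text.
# Assumes the `openai` Python package (v1.x) and an OPENAI_API_KEY env var.
# `samples` is a hypothetical stand-in for the datasets used in the paper.
from openai import OpenAI

client = OpenAI()

samples = [
    ("I will find you and hurt you", True),   # label True = sensitive
    ("Great product, fast shipping", False),  # label False = benign
]

fp = fn = 0
for text, is_sensitive in samples:
    resp = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    flagged = resp.results[0].flagged
    fp += int(flagged and not is_sensitive)   # false positive
    fn += int(not flagged and is_sensitive)   # false negative

print(f"false positives: {fp}, false negatives: {fn}")
```

The same loop generalizes to other moderation backends (e.g., a locally hosted Llama Guard 3) by swapping the call that produces the `flagged` decision.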