NSFW (Not Safe for Work) content can have severe adverse effects on users of open-domain dialogue systems. However, research on detecting NSFW language, especially sexually explicit content, within a dialogue context has lagged significantly behind. To address this gap, we introduce CensorChat, a dialogue monitoring dataset aimed at NSFW dialogue detection. Leveraging knowledge distillation from GPT-4 and ChatGPT, this dataset offers a cost-effective means of constructing NSFW content detectors. The process entails collecting real-life human-machine interaction data and decomposing it into single utterances and single-turn dialogues in which the chatbot delivers the final utterance. ChatGPT is employed to annotate the unlabeled data, which serves as the training set. Validation and test sets are constructed using ChatGPT and GPT-4 as annotators, with rationale validation and a self-criticism strategy for resolving labeling discrepancies. A BERT model is fine-tuned as a text classifier on the pseudo-labeled data, and its performance is assessed. The study emphasizes the importance of AI systems prioritizing user safety and well-being in digital conversations while respecting freedom of expression. The proposed approach not only advances NSFW content detection but also aligns with evolving user protection needs in AI-driven dialogues.
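The dual-annotator labeling step for the validation and test sets can be sketched as follows. This is a minimal illustration, assuming a simple policy: items where ChatGPT and GPT-4 agree are accepted, and disagreements are flagged for the self-criticism re-annotation pass. The function name, label vocabulary, and resolution policy are illustrative assumptions, not the authors' exact implementation.

```python
def merge_annotations(chatgpt_labels, gpt4_labels):
    """Combine NSFW labels ("nsfw" / "safe") from two LLM annotators.

    Items on which the annotators agree are accepted directly; indices
    of disagreements are collected so each model can re-examine its own
    label in a self-criticism pass (hypothetical policy for illustration).
    """
    accepted, disputed = [], []
    for i, (a, b) in enumerate(zip(chatgpt_labels, gpt4_labels)):
        if a == b:
            accepted.append((i, a))   # consensus label kept as-is
        else:
            disputed.append(i)        # routed to self-criticism re-annotation
    return accepted, disputed
```

Under this sketch, only consensus labels enter the evaluation sets directly, which keeps annotation cost low while reserving the more expensive self-criticism step for genuinely ambiguous utterances.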