Large language models (LLMs) are trained on broad corpora and then used in communities with specialized norms. Is providing LLMs with community rules enough for models to follow these norms? We evaluate LLMs' capacity to detect (Task 1) and correct (Task 2) biased Wikipedia edits according to Wikipedia's Neutral Point of View (NPOV) policy. LLMs struggled with bias detection, achieving only 64% accuracy on a balanced dataset. Models exhibited contrasting biases (some under- and others over-predicted bias), suggesting distinct priors about neutrality. LLMs performed better at generation, removing 79% of words removed by Wikipedia editors. However, LLMs made additional changes beyond Wikipedia editors' simpler neutralizations, resulting in high-recall but low-precision editing. Interestingly, crowdworkers rated AI rewrites as more neutral (70%) and fluent (61%) than Wikipedia-editor rewrites. Qualitative analysis found LLMs sometimes applied NPOV more comprehensively than Wikipedia editors but often made extraneous non-NPOV-related changes (such as grammar). LLMs may apply rules in ways that resonate with the public but diverge from community experts. While potentially effective for generation, LLMs may reduce editor agency and increase moderation workload (e.g., verifying additions). Even when rules are easy to articulate, having LLMs apply them like community members may still be difficult.
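To make the generation metrics concrete, the sketch below illustrates (under simplifying assumptions, not the paper's actual evaluation code) the word-level recall/precision idea behind the Task 2 results: recall is the fraction of the Wikipedia editor's word removals that the LLM also made, and precision is the fraction of the LLM's removals that the editor also made. Whitespace tokenization, multiset counting, and the example sentences are all illustrative assumptions.

```python
from collections import Counter

def removed_words(original: str, rewrite: str) -> Counter:
    """Words present in the original but missing (or less frequent) in the rewrite."""
    orig, new = Counter(original.lower().split()), Counter(rewrite.lower().split())
    return orig - new  # multiset difference of word counts

def removal_recall_precision(original: str, editor_rewrite: str, llm_rewrite: str):
    editor_removed = removed_words(original, editor_rewrite)
    llm_removed = removed_words(original, llm_rewrite)
    overlap = sum((editor_removed & llm_removed).values())
    recall = overlap / max(sum(editor_removed.values()), 1)   # editor removals the LLM also made
    precision = overlap / max(sum(llm_removed.values()), 1)   # LLM removals the editor also made
    return recall, precision

# Hypothetical example: the LLM removes the loaded words the editor removed,
# but also rewrites more, yielding high recall and lower precision.
original = "The clearly brilliant senator bravely fought the absurd bill."
editor   = "The senator fought the absurd bill."
llm      = "The senator opposed the bill."
print(removal_recall_precision(original, editor, llm))  # (1.0, 0.6)
```

This mirrors the abstract's finding in miniature: an LLM can cover the editor's neutralizing removals (high recall) while making extra changes the editor did not (lower precision).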