As AI systems become pervasive, grounding their behavior in human values is critical. Prior work suggests that language models (LMs) exhibit limited inherent moral reasoning, leading to calls for explicit moral teaching. However, constructing ground-truth data for moral evaluation is difficult given plural moral frameworks and pervasive biases. We investigate unsupervised elicitation as an alternative, asking whether pretrained (base) LMs possess intrinsic moral reasoning capability that can be surfaced without human supervision. Using the Internal Coherence Maximization (ICM) algorithm across three benchmark datasets and four LMs, we test whether ICM can reliably label moral judgments, generalize across moral frameworks, and mitigate social bias. Results show that ICM outperforms all pretrained and chatbot baselines on the Norm Bank and ETHICS benchmarks, and that fine-tuning on ICM labels performs on par with or better than fine-tuning on human labels. Across theoretically motivated moral frameworks, ICM yields its largest relative gains on Justice and Commonsense morality. Furthermore, although chatbot LMs exhibit social-bias failure rates comparable to those of their pretrained counterparts, ICM reduces such errors by more than half, with the largest improvements on race, socioeconomic status, and politics. These findings suggest that pretrained LMs possess latent moral reasoning capacities that can be elicited through unsupervised methods such as ICM, offering a scalable path for AI alignment.
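To make the elicitation idea concrete, the following is a minimal sketch of the kind of objective ICM optimizes: it searches over label assignments for the set that the base LM finds most mutually predictable while penalizing logical inconsistency, using simulated annealing over label flips. This is an illustrative simplification, not the paper's exact procedure; the helper callables `lm_logprob` and `count_inconsistencies`, and all hyperparameters shown, are hypothetical stand-ins introduced here for exposition.

```python
import math
import random

def icm_label(examples, lm_logprob, count_inconsistencies,
              alpha=50.0, n_iters=2000, t0=10.0, t_min=0.01, decay=0.995):
    """Simplified Internal Coherence Maximization over binary labels.

    Assumed (hypothetical) helpers, supplied by the caller:
      lm_logprob(example, label, context) -> float
          log-probability the base LM assigns to `label` for `example`,
          conditioned on the remaining (example, label) pairs in `context`.
      count_inconsistencies(examples, labels) -> int
          number of logically contradictory label pairs (e.g., the same
          act judged both acceptable and unacceptable in equivalent framings).
    """
    def score(labels):
        # Mutual predictability: how well each label is predicted from the rest.
        mutual = sum(
            lm_logprob(ex, lab,
                       [(e, l) for j, (e, l) in enumerate(zip(examples, labels)) if j != i])
            for i, (ex, lab) in enumerate(zip(examples, labels))
        )
        # Coherence objective: predictability minus an inconsistency penalty.
        return alpha * mutual - count_inconsistencies(examples, labels)

    labels = [random.choice([0, 1]) for _ in examples]
    current = score(labels)
    temp = t0
    for _ in range(n_iters):
        # Propose flipping one label.
        i = random.randrange(len(examples))
        proposal = labels[:i] + [1 - labels[i]] + labels[i + 1:]
        candidate = score(proposal)
        # Annealed acceptance: always keep improvements; occasionally keep
        # regressions early on (high temperature) to escape local optima.
        if candidate >= current or random.random() < math.exp((candidate - current) / temp):
            labels, current = proposal, candidate
        temp = max(t_min, temp * decay)
    return labels
```

This fixed-pool annealing loop is only meant to convey the coherence score being maximized; the published ICM algorithm additionally grows the labeled pool incrementally and repairs inconsistencies as it goes.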