There is growing consensus that language model (LM) developers should not be the sole deciders of LM behavior, creating a need for methods that enable the broader public to collectively shape the behavior of the LM systems that affect them. To address this need, we present Collective Constitutional AI (CCAI): a multi-stage process for sourcing and integrating public input into LMs, from identifying a target population, to sourcing principles, to training and evaluating a model. We demonstrate the real-world practicality of this approach by creating what is, to our knowledge, the first LM fine-tuned with collectively sourced public input, and by evaluating this model against a baseline model trained with established principles from an LM developer. Our quantitative evaluations demonstrate several benefits of our approach: the CCAI-trained model shows lower bias across nine social dimensions than the baseline model, while maintaining equivalent performance on language, math, and helpful-harmless evaluations. Qualitative comparisons of the models suggest that they differ in line with their respective constitutions; for example, when prompted with contentious topics, the CCAI-trained model tends to generate responses that reframe the matter positively rather than refusing to engage. These results demonstrate a promising, tractable pathway toward publicly informed development of language models.