Auditing Large Language Models (LLMs) to discover their biases and preferences is an emerging challenge in creating Responsible Artificial Intelligence (AI). While various methods have been proposed to elicit the preferences of such models, LLM trainers have adopted countermeasures, such that LLMs hide, obfuscate, or refuse point-blank to disclose their positions on certain subjects. This paper presents PRISM, a flexible, inquiry-based methodology for auditing LLMs that seeks to elicit such positions indirectly through task-based inquiry prompting rather than direct inquiry about those preferences. To demonstrate the utility of the methodology, we applied PRISM to the Political Compass Test, assessing the political leanings of twenty-one LLMs from seven providers. We show that, by default, LLMs espouse positions that are economically left and socially liberal (consistent with prior work). We also characterise the space of positions these models are willing to espouse: some models are more constrained and less compliant, while others are more neutral and objective. In sum, PRISM can more reliably probe and audit LLMs to understand their preferences, biases, and constraints.