As Large Language Models increasingly mediate human communication and decision-making, understanding their value expression becomes critical for research across disciplines. This work presents the Ethics Engine, a modular Python pipeline that transforms psychometric assessment of LLMs from a technically complex endeavor into an accessible research tool. The pipeline demonstrates how thoughtful infrastructure design can expand participation in AI research, enabling investigators across cognitive science, political psychology, education, and other fields to study value expression in language models. Recent adoption by University of Edinburgh researchers studying authoritarianism validates its research utility: the pipeline has processed over 10,000 AI responses across multiple models and contexts. We argue that such tools fundamentally change the landscape of AI research by lowering technical barriers while maintaining scientific rigor. As LLMs increasingly serve as cognitive infrastructure, their embedded values shape millions of daily interactions. Without systematic measurement of these value expressions, we deploy systems whose moral influence remains uncharted. The Ethics Engine enables the rigorous assessment necessary for informed governance of these influential technologies.
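To make the kind of workflow described above concrete, the following is a minimal illustrative sketch of administering Likert-scale survey items to a language model and aggregating the responses into a scale score. All names (`Item`, `administer`, the stubbed model callable) are hypothetical and do not reflect the Ethics Engine's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Item:
    text: str             # survey item wording
    reverse_scored: bool  # whether agreement indicates the low end of the scale

def administer(items: List[Item], model: Callable[[str], int],
               scale_max: int = 5) -> float:
    """Present each item to the model and return the mean scale score."""
    scores = []
    for item in items:
        raw = model(f"Rate 1-{scale_max}: {item.text}")
        raw = min(max(raw, 1), scale_max)  # clamp out-of-range responses
        scores.append(scale_max + 1 - raw if item.reverse_scored else raw)
    return sum(scores) / len(scores)

# Usage with a stub standing in for an actual LLM API call:
items = [Item("People should defer to authority.", False),
         Item("Individuals should question rules.", True)]
stub_model = lambda prompt: 4  # pretend the model always answers 4
print(administer(items, stub_model))  # → 3.0 (4 direct, 2 reverse-scored)
```

In a real pipeline, the stubbed callable would wrap an API client for each model under study, and runs would be repeated across prompt contexts to support the multi-model, multi-context comparisons the abstract mentions.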