Introduction. AI ethics is framed distinctly across actors and stakeholder groups. We report results from a case study analyzing OpenAI's ethical AI discourse. Method. The research addressed two questions: how has OpenAI's public discourse used 'ethics', 'safety', 'alignment', and related concepts over time, and what does this discourse signal about ethical framing in practice? A structured corpus was assembled from public documentation, differentiating communication aimed at a general audience from communication aimed at an academic audience. Analysis. Qualitative content analysis of ethical themes combined inductively derived and deductively applied codes. Quantitative analysis applied computational content analysis via NLP to model topics and quantify changes in rhetoric over time. Visualizations report aggregate results. To support reproducibility, our code is released at https://github.com/famous-blue-raincoat/AI_Ethics_Discourse. Results. Safety and risk discourse dominates OpenAI's public communication and documentation, while the frameworks and vocabularies of academic and advocacy ethics are largely absent. Conclusions. Implications for governance are presented, along with discussion of ethics-washing practices in industry.