Mainstream artificial intelligence (AI) ethics, with its reliance on top-down, principle-driven frameworks, fails to account for the situated realities of the diverse communities affected by AI. Critics have argued that AI ethics frequently serves corporate interests through practices of 'ethics washing', operating more as a tool for public relations than as a means of preventing harm or advancing the common good. As a result, critical scholars have grown increasingly sceptical, casting the field as complicit in sustaining harmful systems rather than challenging or transforming them. In response, this paper adopts a Science and Technology Studies (STS) perspective to critically interrogate the field of AI ethics, applying to ethics the same analytic tools that STS has long directed at disciplines such as biology, medicine, and statistics. This perspective reveals a core tension between vertical (top-down, principle-based) and horizontal (risk-mitigating, implementation-oriented) approaches to ethics. By tracing how these two models have shaped the discourse, we show that both fall short in addressing the complexities of AI as a socio-technical assemblage, embedded in practice and entangled with power. To move beyond these limitations, we propose a threefold reorientation of AI ethics. First, we call for a shift in foundations: from top-down abstraction to empirical grounding. Second, we advocate pluralisation: moving beyond Western-centric frameworks toward a multiplicity of onto-epistemic perspectives. Finally, we outline strategies for reconfiguring AI ethics as a transformative force, moving from narrow paradigms of risk mitigation toward the co-creation of technologies of hope.