Recent research illustrates how AI can be developed and deployed in a manner detached from the concrete social context of application. By abstracting from the contexts of AI application, practitioners also disengage from the distinct normative structures that govern them. Building on Helen Nissenbaum's framework of contextual integrity, I illustrate how disregard for contextual norms can threaten the integrity of a context, often with decisive ethical implications. I argue that efforts to promote responsible and ethical AI can inadvertently contribute to, and seemingly legitimize, this disregard for established contextual norms. Echoing a persistent undercurrent in technology ethics that treats emerging technologies as uncharted moral territory, certain approaches to AI ethics can promote a notion of AI as a novel and distinct realm for ethical deliberation, norm setting, and virtue cultivation. This narrative of AI as new ethical ground, however, can come at the expense of practitioners, policymakers, and ethicists engaging with already established norms and virtues that were gradually cultivated to promote successful and responsible practice within concrete social contexts. In response, I question the current narrow prioritization in AI ethics of moral innovation over moral preservation. Engaging with emerging foundation models as well, I advocate a moderately conservative approach to the ethics of AI that prioritizes the responsible and considered integration of AI within established social contexts and their respective normative structures.