The use of artificial intelligence (AI) in the public sector is best understood as a continuation and intensification of long-standing processes of rationalization and bureaucratization. Drawing on Weber, we take the core of these processes to be the replacement of tradition with instrumental rationality, i.e., the most calculable and efficient way of achieving any given policy objective. In this article, we demonstrate how many of the criticisms directed towards AI systems, both among the public and in scholarship, spring from well-known tensions at the heart of Weberian rationalization. To illustrate this point, we introduce a thought experiment in which AI systems are used to optimize tax policy in pursuit of a specific normative end: reducing economic inequality. Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible. However, it also highlights that AI-driven policy optimization (i) comes at the expense of other, competing political values, (ii) overrides citizens' sense of their non-instrumental obligations to each other, and (iii) undermines the notion of humans as self-determining beings. Contemporary scholarship and advocacy aimed at ensuring that AI systems are legal, ethical, and safe build on and reinforce central assumptions that underpin the process of rationalization, including the modern idea that science can sweep away oppressive systems and replace them with a rule of reason that would rescue humans from moral injustices. That is overly optimistic. Science can only provide the means; it cannot dictate the ends. Nonetheless, the use of AI in the public sector can also benefit the institutions and processes of liberal democracies. Most importantly, AI-driven policy optimization demands that normative ends be made explicit and formalized, thereby subjecting them to public scrutiny and debate.