Since 1887, administrative law has navigated a "capability-accountability trap": technological change forces government to become more sophisticated, but sophistication renders agencies opaque to generalist overseers like the courts and Congress. The law's response--substituting procedural review for substantive oversight--has produced a sedimentary accretion of requirements that ossify capacity without ensuring democratic control. This Article argues that the Supreme Court's post-Loper Bright retrenchment is best understood as an effort to shrink administration back to comprehensible size in response to this complexification. But reducing complexity in this way sacrifices capability precisely when climate change, pandemics, and AI risks demand more sophisticated governance. AI offers a different path. Unlike many prior administrative technologies that increased opacity alongside capacity, AI can help build "scrutability" in government, translating technical complexity into accessible terms, surfacing the assumptions that matter for oversight, and enabling substantive verification of agency reasoning. This Article proposes three doctrinal innovations within administrative law to realize this potential: a Model and System Dossier (documenting model purpose, evaluation, monitoring, and versioning) extending the administrative record to AI decision-making; a material-model-change trigger specifying when AI updates require new process; and a "deference to audit" standard that rewards agencies for auditable evaluation of their AI tools. The result is a framework for what this Article calls the "Fourth Settlement," administrative law that escapes the capability-accountability trap by preserving capability while restoring comprehensible oversight of administration.