This paper argues that existing governance mechanisms for mitigating risks from AI systems rest on the `Big Compute' paradigm -- a set of assumptions about the relationship between AI capabilities and infrastructure -- which may not hold in the future. To address this, the paper introduces the `Proliferation' paradigm, which anticipates the rise of smaller, decentralized, open-source AI models that are easier to augment and easier to train without detection. It posits that these developments are probable and likely to bring both benefits and novel risks that are difficult to mitigate through existing governance mechanisms. The final section explores governance strategies to address these risks, focusing on access governance, decentralized compute oversight, and information security. While these strategies offer potential solutions, the paper acknowledges their limitations and cautions developers to weigh benefits against developments that could lead to a `vulnerable world'.