The UK has pursued a distinctive path in AI regulation: less cautious than the EU but more willing to address risks than the US, and it has emerged as a global leader in coordinating AI safety efforts. Impressive developments from companies such as London-based DeepMind began to spark concerns in the UK about catastrophic risks from around 2012, although regulatory discussion at the time focused on bias and discrimination. By 2022, these discussions had evolved into a "pro-innovation" strategy, in which the government directed existing regulators to take a light-touch approach, governing AI at the point of use but avoiding direct regulation of the technology or its infrastructure. The arrival of ChatGPT in late 2022 galvanised concerns that this approach might be insufficient. The UK responded by establishing an AI Safety Institute to monitor risks and by hosting the first international AI Safety Summit in 2023, but, unlike the EU, it refrained from regulating frontier AI development in addition to its use. A new government elected in 2024 promised to address this gap but, at the time of writing, has yet to do so. What should the UK do next? The government faces competing objectives: harnessing AI for economic growth and better public services while mitigating risk. In light of these, we propose establishing a flexible, principles-based regulator to oversee the most advanced AI development and adopting defensive measures against risks from AI-enabled biological design tools, and we argue that more technical work is needed to understand how to respond to AI-generated misinformation. We also argue for updated legal frameworks on copyright, discrimination, and AI agents, and contend that regulators will have a limited but important role if AI substantially disrupts labour markets. If the UK gets AI regulation right, it could demonstrate how democratic societies can harness AI's benefits while managing its risks.