This article addresses the societal costs of unregulated Artificial Intelligence and proposes a framework that combines innovation with regulation. Over fifty years of AI research, catalyzed by declining computing costs and the proliferation of data, have propelled AI into the mainstream, promising significant economic benefits. Yet this rapid adoption heightens risks, from bias amplification and labor disruption to existential threats posed by autonomous systems. The discourse is polarized between accelerationists, who advocate unfettered technological advancement, and doomers, who call for a slowdown to prevent dystopian outcomes. This piece advocates a middle path that leverages technical innovation and smart regulation to maximize the benefits of AI while minimizing its risks, offering a pragmatic approach to the responsible progress of AI technology. Technical invention beyond the most capable foundation models is needed to contain catastrophic risks. Regulation is required to create incentives for this research while addressing current harms.