Artificial intelligence (AI) is advancing exponentially and is likely to have profound impacts on human wellbeing, social equity, and environmental sustainability. Here we argue that the "alignment problem" in AI research is also an economic alignment problem, as developing advanced AI within a growth-oriented economic system is likely to increase social, environmental, and existential risks. We show that post-growth research offers concepts and policies that could address the economic alignment problem and substantially reduce AI risks, such as by replacing optimisation with satisficing, using the Doughnut of social and planetary boundaries to guide development, and curbing systemic rebound with resource caps. We propose governance and business reforms that treat AI as a commons and prioritise tool-like autonomy-enhancing systems over agentic AI. Finally, we argue that the development of artificial general intelligence (AGI) requires new economic theories and models, for which post-growth scholarship provides a strong foundation.