Many leading AI researchers expect AI development to exceed the transformative impact of all previous technological revolutions. This expectation rests on the idea that AI will be able to automate the process of AI research itself, creating a positive feedback loop. In August and September of 2025, we interviewed 25 leading researchers from frontier AI labs and academia, including participants from Google DeepMind, OpenAI, Anthropic, Meta, UC Berkeley, Princeton, and Stanford, to understand their perspectives on these scenarios. Though AI systems have not yet been able to recursively self-improve, 20 of the 25 researchers interviewed identified automating AI research as one of the most severe and urgent AI risks. Participants converged on the prediction that AI agents will become more capable at coding, math, and eventually AI development, gradually transitioning from `assistants' or `tools' to `autonomous AI developers,' after which point predictions diverge. While researchers agreed on the possibility of recursive improvement, they disagreed on basic questions of timelines and appropriate governance mechanisms. For example, an epistemic divide emerged between frontier lab researchers and academic researchers, with the latter expressing more skepticism about explosive growth scenarios. Additionally, 17 of the 25 participants expected AI systems with advanced coding or R&D capabilities to be increasingly reserved for internal use at AI companies or governments, unseen by the public. Participants were split on whether setting regulatory ``red lines'' was a good idea, though almost all favored transparency-based mitigations.