The world has recently witnessed an unprecedented acceleration in demand for machine learning and artificial intelligence applications. This surge has placed tremendous strain on the underlying technology stack: the supply chain, GPU-accelerated hardware, software, datacenter power density, and energy consumption. If the current technological trajectory continues, projected demand implies unsustainable spending trends, further limiting market players, stifling innovation, and widening the technology gap. To address these challenges, we propose a fundamental change to AI training infrastructure across the technology ecosystem. These changes require advances in supercomputing and novel AI training approaches, spanning high-level software down to low-level hardware, microprocessor, and chip design, while delivering the energy efficiency a sustainable infrastructure requires. This paper presents an analytical framework that quantitatively highlights these challenges and points to opportunities for reducing the barriers to entry for training large language models.