As neural computation revolutionizes the field of Artificial Intelligence (AI), rethinking the ideal neural hardware is becoming the next frontier. The fast and reliable von Neumann architecture has been the dominant platform for neural computation. Although capable, its separation of memory and computation creates a bottleneck for the energy efficiency of neural computation, in stark contrast to the biological brain. The question remains: how can we efficiently combine memory and computation, while exploiting the physics of the substrate, to build intelligent systems? In this thesis, I explore an alternative approach based on memristive devices for neural computation, where the unique physical dynamics of the devices are exploited for inference, learning, and routing. Guided by the principles of gradient-based learning, we selected the functions that need to be materialized and analyzed connectomics principles for efficient wiring. Despite the non-idealities and noise inherent in analog physics, I provide hardware evidence of the adaptability of local learning to memristive substrates, new material stacks and circuit blocks that aid in solving the credit assignment problem, and efficient routing between analog crossbars for scalable architectures.