Emerging memory technologies have gained significant attention as a promising pathway to overcome the limitations of conventional computing architectures in deep learning applications. By enabling computation directly within memory, these technologies, built on nanoscale devices with tunable, nonvolatile conductance, offer the potential to drastically reduce energy consumption and latency compared to traditional von Neumann systems. This paper introduces XBTorch (short for CrossBarTorch), a novel simulation framework that integrates seamlessly with PyTorch and provides specialized tools for accurately and efficiently modeling crossbar arrays of emerging memory devices. Through detailed comparisons and case studies involving hardware-aware training and inference, we demonstrate how XBTorch offers a unified interface for key research areas such as device-level modeling, cross-layer co-design, and inference-time fault tolerance. While the exemplar studies use ferroelectric field-effect transistor (FeFET) models, the framework remains technology-agnostic, supporting other emerging memories such as resistive RAM (ReRAM) as well as user-defined custom device models. The code is publicly available at: https://github.com/ADAM-Lab-GW/xbtorch
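To make the notion of hardware-aware training concrete, the sketch below shows the general pattern a crossbar simulator can use to plug into PyTorch. This is an illustrative example only, not the actual XBTorch API: the class name `NoisyCrossbarLinear` and its parameters (`levels`, `noise_std`) are hypothetical. Weights are mapped onto a finite set of nonvolatile conductance levels and perturbed by device noise in the forward pass, while a straight-through estimator keeps the backward pass differentiable.

```python
# Illustrative sketch only -- NOT the XBTorch API. It demonstrates the kind of
# hardware-aware layer a crossbar simulator can expose to PyTorch: weights are
# quantized to discrete conductance states and perturbed by device variability.
import torch
import torch.nn as nn

class NoisyCrossbarLinear(nn.Module):
    """Linear layer whose weights behave like discrete, noisy conductances."""

    def __init__(self, in_features, out_features, levels=16, noise_std=0.02):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.levels = levels        # number of programmable conductance states
        self.noise_std = noise_std  # relative device-to-device variation

    def forward(self, x):
        w = self.weight
        w_max = w.abs().max().clamp(min=1e-8)
        # Quantize weights to the device's discrete conductance levels.
        step = 2 * w_max / (self.levels - 1)
        w_q = torch.round(w / step) * step
        # Straight-through estimator: quantized forward, identity backward.
        w_q = w + (w_q - w).detach()
        # Multiplicative noise models programming/read variability.
        if self.training:
            w_q = w_q * (1 + torch.randn_like(w_q) * self.noise_std)
        return x @ w_q.t()

layer = NoisyCrossbarLinear(64, 10)
out = layer(torch.randn(8, 64))  # (8, 10) outputs from the simulated crossbar
```

Because such a layer is a drop-in replacement for `nn.Linear`, the surrounding training loop, optimizer, and model definition stay in standard PyTorch; the device model (quantization granularity, noise statistics) is the only part that changes when a different memory technology is simulated.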