With the advent of high-speed, high-precision, and low-power mixed-signal systems, there is an ever-growing demand for accurate, fast, and energy-efficient analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Unfortunately, with the downscaling of CMOS technology, modern ADCs must trade off speed, power, and accuracy. Recently, memristive neuromorphic architectures for four-bit ADCs/DACs have been proposed. Such converters can be trained in real time using machine learning algorithms to break through the speed-power-accuracy trade-off while optimizing conversion performance for different applications. However, scaling such architectures beyond four bits is challenging. This paper proposes a scalable and modular neural network ADC architecture based on a pipeline of four-bit converters, preserving their inherent advantages in application reconfiguration, mismatch self-calibration, noise tolerance, and power optimization, while achieving higher resolution and throughput at the cost of increased latency. SPICE evaluation shows that an 8-bit pipelined ADC achieves 0.18 LSB INL, 0.20 LSB DNL, 7.6 ENOB, and a 0.97 fJ/conv FOM. This work presents a significant step toward the realization of large-scale neuromorphic data converters.
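To illustrate the pipelining principle the abstract relies on, the following is a minimal behavioral sketch of an ideal two-stage pipeline: a coarse 4-bit stage quantizes the input, the residue is amplified by 2^4 and handed to a fine 4-bit stage, and the two codes are concatenated into an 8-bit result. This models only the ideal signal flow, not the paper's trainable neural-network stages; function names and the [0, 1) input range are illustrative assumptions.

```python
def quantize4(x):
    """Ideal 4-bit quantizer over [0, 1): returns a code in 0..15."""
    return min(int(x * 16), 15)

def pipeline_adc(x, stages=2, bits_per_stage=4):
    """Compose an 8-bit code from two ideal 4-bit sub-converters.

    Each stage quantizes its input, subtracts the quantized value,
    and amplifies the residue by 2**bits_per_stage so the next stage
    again sees a signal in [0, 1). With ideal stages this equals a
    single 8-bit flash conversion, but each sub-converter only needs
    4-bit resolution -- at the cost of one extra stage of latency.
    """
    code = 0
    for _ in range(stages):
        c = quantize4(x)
        code = (code << bits_per_stage) | c
        x = x * 16 - c  # residue, amplified back to [0, 1)
    return code
```

With ideal stages, `pipeline_adc(x)` reproduces `floor(x * 256)`; in a real pipeline, inter-stage gain error and mismatch perturb this mapping, which is what the self-calibrating neural stages are meant to absorb.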