Printed Electronics (PE) stands out as a promising technology for widespread computing due to its distinct attributes, such as low cost and flexible manufacturing. Unlike traditional silicon-based technologies, PE enables stretchable, conformal, and non-toxic hardware. However, PE is constrained by larger feature sizes, making it challenging to implement complex circuits such as machine learning (ML) classifiers. Approximate computing has been proven to reduce the hardware cost of ML circuits such as Multilayer Perceptrons (MLPs). In this paper, we maximize the benefits of approximate computing by integrating hardware approximation into the MLP training process. Due to the discrete nature of hardware approximation, we propose and implement a genetic-based, approximate, hardware-aware training approach specifically designed for printed MLPs. For a 5% accuracy loss, our MLPs achieve over 5x area and power reduction compared to the baseline while outperforming state-of-the-art approximate and stochastic printed MLPs.
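Because the approximation levels of printed hardware are discrete, gradient-based training cannot search over them directly, which motivates the genetic approach mentioned above. The following is a minimal, hypothetical sketch of such a genetic search: it evolves a per-multiplier approximation level to minimize a toy area cost under an accuracy-loss budget. All constants (unit count, area/error tables, budget) are illustrative placeholders, not values from the paper.

```python
import random

random.seed(0)

# Toy genetic search over discrete approximation levels (0 = exact,
# 3 = most approximate) for each multiplier in a printed MLP.
# All numbers below are illustrative, not taken from the paper.
N_UNITS = 8            # hypothetical number of approximable multipliers
LEVELS = [0, 1, 2, 3]  # discrete approximation levels
AREA = {0: 1.0, 1: 0.7, 2: 0.45, 3: 0.25}  # relative area per level (toy)
ERR = {0: 0.0, 1: 0.8, 2: 1.8, 3: 3.5}     # accuracy-loss contribution (toy)
BUDGET = 2.0           # allowed mean accuracy loss, in percent (toy)

def fitness(cfg):
    """Lower is better: total area, heavily penalized past the loss budget."""
    area = sum(AREA[l] for l in cfg)
    loss = sum(ERR[l] for l in cfg) / N_UNITS
    penalty = 100.0 * max(0.0, loss - BUDGET)
    return area + penalty

def mutate(cfg, rate=0.2):
    # Re-draw each gene (approximation level) with probability `rate`.
    return [random.choice(LEVELS) if random.random() < rate else l for l in cfg]

def crossover(a, b):
    # Single-point crossover between two parent configurations.
    cut = random.randrange(1, N_UNITS)
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=40):
    pop = [[random.choice(LEVELS) for _ in range(N_UNITS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()  # best per-multiplier approximation configuration found
```

In a real flow the fitness would come from evaluating the approximated MLP on a validation set plus a synthesized-hardware cost model, rather than the closed-form toy tables used here.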