The dependability of AI models relies largely on the reliability of the underlying computation hardware. Hardware aging attacks can compromise the computing substrate and disrupt AI models over the long run. In this work, we present a new hardware aging attack that exploits the commutative property of addition to disrupt the multiply-and-add operation that forms the backbone of almost all AI models. By permuting the inputs of an adder, the attack preserves functional correctness while inducing unbalanced stress among transistors, accelerating delay degradation in the circuit. Unlike prior approaches that rely on input manipulation or additional trojan circuitry, the proposed method incurs virtually no area or software overhead. Experimental results with two types of multipliers, multiple bit widths, and a mix of AI models and datasets demonstrate that the proposed attack degrades inference accuracy by up to 64% over 4 years, posing a significant threat to AI accelerators. The attack can also be extended to the arithmetic units of general-purpose processors.
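The key observation above, that swapping an adder's two operands changes nothing functionally, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names and the `swap` flag are hypothetical, and the sketch only shows why the permutation is invisible at the software level (the actual attack operates on the hardware operand wiring, where the permutation changes which transistors are stressed by which bit patterns).

```python
def mac(acc, w, x, swap=False):
    """Multiply-and-add: returns acc + w*x.

    `swap` is a hypothetical flag modeling a permutation of the
    adder's two operands. Because addition is commutative, the
    result is identical either way, so the permutation cannot be
    detected by checking outputs.
    """
    prod = w * x
    a, b = (prod, acc) if swap else (acc, prod)
    return a + b  # same value regardless of operand order

# Functional equivalence over a few sample operand triples
for w, x, acc in [(3, 5, 10), (-2, 7, 1), (0, 9, 4)]:
    assert mac(acc, w, x, swap=False) == mac(acc, w, x, swap=True)
```

At the circuit level, however, the two operand ports of an adder drive different transistors; routing a skewed operand distribution through one port concentrates stress there, which is the aging mechanism the abstract describes.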