Vector multiplication is a fundamental operation in AI acceleration, accounting for over 85% of the computational load in convolution tasks. While essential, these multiplications are primary drivers of area, power, and delay in modern datapath designs. Conventional multiplier architectures often force a compromise between latency and complexity: high-speed array multipliers demand significant power, whereas sequential designs offer efficiency at the cost of throughput. This paper presents a precompute-reuse nibble multiplier architecture that bridges this gap by reformulating multiplication as a structured composition of reusable nibble-level precomputed values. The proposed design treats each operand as an independent low-precision element, decomposes it into fixed-width nibbles, and generates scaled multiples of a broadcast operand using compact shift-add logic. By replacing wide lookup tables and multiway multiplexers with logic-based precomputation and regular accumulation, the architecture decouples cycle complexity from gate delay. The design completes each 8-bit multiplication in two deterministic cycles with a short critical path, scales efficiently across vector lanes, and significantly reduces area and energy consumption. RTL implementations synthesized in TSMC 28 nm technology demonstrate up to 1.69× area reduction and 1.63× power improvement over shift-add multipliers, and nearly 2.6× area and 2.7× power savings compared to LUT-based array multipliers at the 128-bit vector scale.
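To make the dataflow concrete, the following is a minimal behavioral sketch in Python (the paper's implementation is RTL; the function names here are illustrative, not from the source). It assumes the scheme described above: the sixteen shift-add multiples of the broadcast operand are precomputed once, and each 8-bit product is then composed from two nibble-indexed terms.

```python
# Behavioral sketch of precompute-reuse nibble multiplication.
# b is the broadcast operand shared by all vector lanes.

def precompute_multiples(b: int) -> list[int]:
    """Build 0*b .. 15*b with shift-add logic only (no wide LUT)."""
    m = [0] * 16
    for k in range(1, 16):
        # k*b = 2*(k//2)*b + (k%2)*b: one shift plus at most one add.
        m[k] = (m[k >> 1] << 1) + (b if k & 1 else 0)
    return m

def nibble_multiply(a: int, m: list[int]) -> int:
    """Compose an 8-bit product from two precomputed nibble terms."""
    lo = a & 0xF           # low nibble, weight 2^0
    hi = (a >> 4) & 0xF    # high nibble, weight 2^4
    return (m[hi] << 4) + m[lo]

def vector_multiply(lanes: list[int], b: int) -> list[int]:
    # The precomputed multiples are built once and reused by every
    # lane, which is where the area and energy savings come from.
    m = precompute_multiples(b)
    return [nibble_multiply(a, m) for a in lanes]

# Exhaustive sanity check against ordinary 8-bit multiplication.
for b in range(256):
    m = precompute_multiples(b)
    assert all(nibble_multiply(a, m) == a * b for a in range(256))
```

One plausible reading of the two-cycle claim is that one cycle produces the shift-add multiples and one selects and accumulates the two nibble terms; whether the precompute step is further amortized while the broadcast operand is held is a scheduling detail the abstract does not specify.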