The implementation of Hyperdimensional Computing (HDC) on In-Memory Computing (IMC) architectures faces significant challenges due to the mismatch between high-dimensional vectors and IMC array sizes, which leads to inefficient memory utilization and increased computation cycles. This paper presents MEMHD, a Memory-Efficient Multi-centroid HDC framework designed to address these challenges. MEMHD introduces a clustering-based initialization method and quantization-aware iterative learning for multi-centroid associative memory. Through these approaches and its overall architecture, MEMHD significantly reduces memory requirements while maintaining or improving classification accuracy. Our approach fully utilizes IMC arrays and enables one-shot (or few-shot) associative search. Experimental results demonstrate that MEMHD outperforms state-of-the-art binary HDC models, achieving up to 13.69% higher accuracy at the same memory usage, or 13.25x greater memory efficiency at the same accuracy level. Moreover, when mapped to 128x128 IMC arrays, MEMHD reduces computation cycles by up to 80x and array usage by up to 71x compared to baseline IMC mapping methods, while significantly improving energy and computation-cycle efficiency.
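To make the multi-centroid associative-memory idea concrete, the following is a minimal sketch (not the paper's implementation): each class is represented by K binary centroid hypervectors rather than one, and classification is a single nearest-centroid search under Hamming distance. The initialization here is a crude stand-in for MEMHD's clustering-based method, and the dimensionality, noise level, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 256       # hypervector dimensionality (real HDC typically uses ~10,000)
K = 2         # centroids per class (multi-centroid associative memory)
CLASSES = 3

def hamming(a, b):
    # Hamming distance between two binary hypervectors
    return int(np.count_nonzero(a != b))

def init_centroids(X, labels):
    # Stand-in for clustering-based initialization: split each class's
    # training hypervectors into K groups and binarize by majority vote.
    memory = {}
    for c in range(CLASSES):
        groups = np.array_split(X[labels == c], K)
        memory[c] = [(g.mean(axis=0) > 0.5).astype(np.uint8) for g in groups]
    return memory

def classify(x, memory):
    # One-shot associative search: nearest centroid across all classes
    best_c, best_d = None, None
    for c, cents in memory.items():
        for cent in cents:
            d = hamming(x, cent)
            if best_d is None or d < best_d:
                best_c, best_d = c, d
    return best_c

# Synthetic data: noisy copies of a random binary prototype per class
protos = rng.integers(0, 2, size=(CLASSES, D), dtype=np.uint8)
X, y = [], []
for c in range(CLASSES):
    for _ in range(20):
        v = protos[c].copy()
        v[rng.random(D) < 0.1] ^= 1  # flip ~10% of the bits
        X.append(v)
        y.append(c)
X, y = np.array(X), np.array(y)

memory = init_centroids(X, y)
acc = np.mean([classify(x, memory) == c for x, c in zip(X, y)])
```

On an IMC array, the inner Hamming-distance loop would map to a single in-memory search over the stored centroids, which is why keeping the total centroid count small directly reduces array usage and computation cycles.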