Deep neural networks (DNNs) have become indispensable in many real-life applications such as natural language processing and autonomous systems. However, deploying DNNs on resource-constrained devices, e.g., RISC-V platforms, remains challenging due to the high computational and memory demands of fully connected (FC) layers, which dominate resource consumption. Low-rank factorization (LRF) is an effective approach to compressing FC layers, but the vast design space of LRF solutions involves complex trade-offs among FLOPs, memory footprint, inference time, and accuracy, making the LRF process complex and time-consuming. This paper introduces an end-to-end LRF design space exploration methodology and a specialized design tool for optimizing FC layers on RISC-V processors. Using the Tensor Train Decomposition (TTD) offered by the TensorFlow T3F library, the proposed work prunes the LRF design space by excluding, first, inefficient decomposition shapes and, second, solutions with poor inference performance on RISC-V architectures. Compiler optimizations are then applied to enhance the performance of the custom T3F layers, minimizing inference time and boosting computational efficiency. On average, our TT-decomposed layers run 3x faster than IREE and 8x faster than Pluto on the same compressed model. This work provides an efficient solution for deploying DNNs on edge and embedded devices powered by RISC-V architectures.
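To make the TTD-based compression concrete, the sketch below shows the core idea behind Tensor Train decomposition via sequential truncated SVDs (the classical TT-SVD algorithm), in plain numpy rather than the T3F library the paper actually uses. All function names (`tt_svd`, `tt_to_full`) and the toy tensor sizes are illustrative assumptions, not part of the paper's tooling; a TT-rank-2 decomposition of a 4x4x4 tensor stores 32 parameters instead of 64, which is the kind of FLOP/memory trade-off the design space exploration reasons about.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a dense tensor into TT cores via sequential truncated SVDs.

    Returns a list of cores, core k having shape (r_{k-1}, d_k, r_k),
    with r_0 = r_d = 1. max_rank caps every intermediate TT rank.
    """
    shape = tensor.shape
    cores = []
    C = tensor.reshape(shape[0], -1)
    r_prev = 1
    for k in range(len(shape) - 1):
        # Unfold: rows combine the previous rank with the current mode.
        C = C.reshape(r_prev * shape[k], -1)
        U, S, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, len(S))  # truncate to the rank budget
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = np.diag(S[:r]) @ Vt[:r, :]  # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full dense tensor."""
    res = cores[0]  # shape (1, d_0, r_1)
    for core in cores[1:]:
        # Contract the trailing rank axis with the next core's leading axis.
        res = np.tensordot(res, core, axes=1)
    return res[0, ..., 0]  # drop the boundary rank-1 axes

# Build a tensor with exact TT rank 2, then recover it losslessly.
rng = np.random.default_rng(0)
G0, G1, G2 = rng.normal(size=(1, 4, 2)), rng.normal(size=(2, 4, 2)), rng.normal(size=(2, 4, 1))
full = tt_to_full([G0, G1, G2])
rec = tt_to_full(tt_svd(full, max_rank=2))
print(np.allclose(rec, full))  # exact up to floating-point error
```

In the paper's setting, the FC weight matrix is first reshaped into such a higher-order tensor (the "decomposition shape"), and both the shape and the TT ranks are the knobs the exploration methodology prunes.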