Implementing convolutional neural networks (CNNs) on field-programmable gate arrays (FPGAs) has emerged as a promising alternative to GPUs, offering lower latency, greater power efficiency, and greater flexibility. However, such development remains complex because of the hardware expertise required and the long synthesis, placement, and routing stages, which slow design cycles, prevent rapid exploration of network configurations, and make resource optimization under severe constraints particularly challenging. This paper proposes a library of configurable convolution blocks designed to optimize FPGA implementation and adapt to available resources. It also presents a methodological framework for developing mathematical models that predict FPGA resource utilization. The approach is validated by analyzing the correlation between layer parameters and measured resource usage, then quantifying prediction accuracy with error metrics. The results show that the designed blocks enable convolution layers to adapt to hardware constraints, and that the models accurately predict resource consumption, providing a useful tool for FPGA selection and optimized CNN deployment.
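The resource-prediction idea described above can be illustrated with a minimal sketch. This is not the paper's actual model: the design points, DSP counts, and the choice of a linear fit on multiply-accumulate (MAC) count are all hypothetical, shown only to make the workflow concrete.

```python
# Illustrative sketch (hypothetical data, not the paper's models):
# fit a linear model predicting FPGA resource usage (here, DSP slices)
# from convolution-layer parameters, then use it for device selection.
import numpy as np

# Hypothetical design points: (input channels, output channels, kernel size)
params = np.array([
    [3, 16, 3],
    [16, 32, 3],
    [32, 64, 3],
    [64, 64, 5],
])
# Hypothetical post-synthesis DSP usage for each design point.
dsp_used = np.array([144, 512, 1024, 2100], dtype=float)

# Feature: MACs per output pixel, C_in * C_out * K^2,
# a natural predictor of DSP consumption for convolution layers.
macs = params[:, 0] * params[:, 1] * params[:, 2] ** 2
X = np.stack([macs, np.ones_like(macs)], axis=1).astype(float)

# Least-squares fit: dsp ≈ a * MACs + b
(a, b), *_ = np.linalg.lstsq(X, dsp_used, rcond=None)

def predict_dsp(c_in, c_out, k):
    """Predict DSP usage for a candidate convolution layer."""
    return a * (c_in * c_out * k * k) + b

# Check whether a candidate layer fits a hypothetical device budget.
budget = 1800
print(predict_dsp(32, 32, 3) <= budget)
```

In practice such models would be fitted per resource type (LUTs, BRAM, DSP) from a sweep of synthesized configurations, and the correlation and error-metric analysis mentioned in the abstract would validate each fit.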