In almost every computation-intensive application, from 6G communication systems to autonomous driving platforms, a large portion of the computation must be performed close to the client side. Edge computing (AI at the Edge) on mobile devices is one of the most effective approaches to meeting this requirement. In this work, therefore, the possibilities and challenges of implementing a low-latency, power-optimized smart mobile system are examined. Deploying Field Programmable Gate Array (FPGA)-based solutions at the edge leads to bandwidth-optimized designs and can consequently boost computational effectiveness within system-level deadlines. Moreover, various performance aspects and the implementation feasibility of Neural Networks (NNs) on both embedded FPGA edge devices (using a Xilinx Multiprocessor System on Chip (MPSoC)) and the cloud are discussed throughout this research. The main goal of this work is to demonstrate a hybrid system that uses the deep learning programmable engine developed by Xilinx Inc. as the main component of the hardware accelerator. Based on this design, an efficient mobile edge computing system is then presented using an embedded solution.