The emerging need for fast and power-efficient AI/ML deployment on board spacecraft has pushed the space industry to examine specialized accelerators that have already been used successfully in terrestrial applications. In this direction, the current work introduces a highly heterogeneous co-processing architecture built around the UltraScale+ MPSoC and its programmable DPU, together with commercial AI/ML accelerators such as the MyriadX VPU and the Edge TPU. The proposed architecture, called MPAI, handles networks of varying size and complexity and accommodates speed-accuracy-energy trade-offs by exploiting the diversity of the accelerators in arithmetic precision and computational power. This brief provides the technical background and reports preliminary experimental results and outcomes.