The deployment of autonomous systems has grown remarkably in recent years, driven by their integration into sectors such as industry, medicine, logistics, and domestic environments. This expansion is accompanied by security issues that pose significant risks, given the critical nature of autonomous systems, especially those operating in environments involving human interaction. Furthermore, technological advances and the high operational and architectural complexity of autonomous systems have enlarged the attack surface. This article presents a security auditing procedure specific to autonomous systems, based on a layer-structured methodology, a threat taxonomy adapted to the robotic context, and a set of concrete mitigation measures. The validity of the proposed approach is demonstrated through four practical case studies on representative robotic platforms: the Vision 60 military quadruped from Ghost Robotics, the A1 robot from Unitree Robotics, the UR3 collaborative arm from Universal Robots, and the Pepper social robot from Aldebaran Robotics.