Self-adaptive robots adjust their behavior in response to unpredictable environmental changes. These robots often incorporate deep learning (DL) components into their software to support functionality such as perception, decision-making, and control, enhancing autonomy and self-adaptability. However, the inherent uncertainty of DL-enabled software makes it challenging to ensure its dependability in dynamic environments. To address this, test generation techniques have been developed for robot software, and classical mutation analysis injects faults into the software to assess a test suite's effectiveness in detecting the resulting failures. Existing mutation analysis techniques, however, cannot assess this effectiveness under the uncertainty inherent to DL-enabled software. To this end, we propose UAMTERS, an uncertainty-aware mutation analysis framework that introduces uncertainty-aware mutation operators to explicitly inject stochastic uncertainty into DL-enabled robotic software, simulating uncertainty in its behavior. We further propose mutation score metrics that quantify a test suite's ability to detect failures under varying levels of uncertainty. We evaluate UAMTERS on three robotic case studies, demonstrating that it distinguishes test suite quality more effectively and captures uncertainty-induced failures in DL-enabled software.
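To make the idea concrete, the following is a minimal illustrative sketch (not the UAMTERS implementation) of an uncertainty-aware mutation operator and a mutation score. The operator `uncertainty_mutant`, the Gaussian-noise model, and the kill criterion are all assumptions chosen for illustration: the mutant perturbs a DL component's output with stochastic noise, and the score is the fraction of (test, uncertainty-level) pairs in which the test detects the perturbation.

```python
import random

def uncertainty_mutant(perception_fn, sigma):
    """Hypothetical uncertainty-aware mutation operator: wraps a DL
    perception function and perturbs its output with Gaussian noise
    of standard deviation `sigma`, simulating stochastic uncertainty
    in the component's behavior."""
    def mutated(x):
        return perception_fn(x) + random.gauss(0.0, sigma)
    return mutated

def mutation_score(test_suite, perception_fn, sigmas, runs=50, threshold=0.5):
    """Illustrative mutation score: the fraction of (test input, sigma)
    pairs in which the uncertainty-injected mutant is killed, i.e. the
    perturbed output deviates from the nominal output by more than
    `threshold` in a majority of `runs` stochastic executions."""
    kills, total = 0, 0
    for sigma in sigmas:
        mutant = uncertainty_mutant(perception_fn, sigma)
        for x in test_suite:
            nominal = perception_fn(x)
            detected = sum(
                abs(mutant(x) - nominal) > threshold for _ in range(runs)
            )
            kills += detected > runs // 2  # majority vote over stochastic runs
            total += 1
    return kills / total
```

Under this toy model, a low `sigma` rarely pushes outputs past the detection threshold while a high `sigma` does so reliably, so the score separates the two uncertainty levels:

```python
random.seed(0)
score = mutation_score([0.0, 1.0, 2.0], lambda x: 2.0 * x, sigmas=[0.1, 2.0])
# sigma=0.1 mutants survive, sigma=2.0 mutants are killed → score 0.5
```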