In recent years, human-centric cyber-physical systems have increasingly incorporated artificial intelligence to extract knowledge from sensor-collected data. Examples include medical monitoring and control systems, as well as autonomous vehicles. Such systems are intended to operate according to established protocols and guidelines for regular operation. However, in many scenarios, such as closed-loop blood glucose control for Type 1 diabetics, self-driving cars, and monitoring systems for stroke diagnosis, these AI-enabled human-centric applications can encounter cases in which their operational mode is uncertain, for instance as a result of a human's interaction with the system. Operating under such uncertain conditions can violate the system's safety and security requirements. This paper discusses operational deviations that can lead these systems to operate under unknown conditions. We then develop a framework for evaluating different strategies to ensure the safety and security of AI-enabled human-centric cyber-physical systems during operational deployment. Finally, as an example, we present a novel personalized image-based technique for detecting unannounced meals in closed-loop blood glucose control for Type 1 diabetics.