Shared control improves human-robot interaction by reducing the user's workload while increasing the robot's autonomy, allowing robots to perform tasks under the user's supervision. Current eye-tracking-driven approaches face several challenges, including limited accuracy in 3D gaze estimation and difficulty interpreting gaze when differentiating between multiple tasks. We present an eye-tracking-driven control framework aimed at enabling individuals with severe physical disabilities to perform daily tasks independently. Our system uses task pictograms as fiducial markers, combined with a feature-matching approach that transmits data about the selected object to perform the necessary task-related measurements with an eye-in-hand configuration. This eye-tracking control does not require knowledge of the user's position relative to the object. The framework correctly interpreted object and task selection in up to 97.9% of measurements. Issues identified during the evaluation were addressed and are shared as lessons learned. Because it integrates state-of-the-art object detection models, the open-source framework can be adapted to new tasks and objects.
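The pictogram selection step described above relies on matching local features between a stored pictogram template and the camera view. The source does not specify the matching algorithm; the sketch below is a minimal, hypothetical illustration of brute-force binary-descriptor matching with a Lowe-style ratio test, where descriptors are represented as plain integers and all names (`match_descriptors`, `pictogram_selected`, `min_matches`) are placeholders, not the authors' implementation.

```python
from typing import List, Tuple


def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")


def match_descriptors(
    template: List[int], scene: List[int], ratio: float = 0.75
) -> List[Tuple[int, int]]:
    """Brute-force match each template descriptor to the scene.

    A match (i, j) is kept only if the best scene candidate is clearly
    closer than the second best (Lowe's ratio test), which suppresses
    ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(template):
        # Sort scene descriptors by distance to this template descriptor.
        dists = sorted((hamming(d, s), j) for j, s in enumerate(scene))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches


def pictogram_selected(
    template: List[int], scene: List[int], min_matches: int = 4
) -> bool:
    """Treat the pictogram (and thus its task) as selected once enough
    confident feature matches are found in the current view."""
    return len(match_descriptors(template, scene)) >= min_matches
```

In practice a framework like this would use a library feature detector (e.g. ORB) producing many such descriptors per pictogram; the threshold `min_matches` trades off false task activations against missed selections.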