Intent inferral, the process by which a robotic device predicts a user's intent from biosignals, offers an effective and intuitive way to control wearable robots. Classical intent inferral methods treat biosignal inputs as unidirectional ground truths for training machine learning models, where the internal state of the model is not directly observable by the user. In this work, we propose reciprocal learning, a bidirectional paradigm that facilitates human adaptation to an intent inferral classifier. Our paradigm consists of iterative, interwoven stages that alternate between updating machine learning models and guiding human adaptation through augmented visual feedback. We demonstrate this paradigm in the context of controlling a robotic hand orthosis for stroke, where the device predicts open, close, and relax intents from electromyographic (EMG) signals and provides appropriate assistance. We use LED progress-bar displays to communicate to the user the classifier's predicted probabilities for the open and close intents. Our experiments with stroke subjects show that reciprocal learning improves performance in a subset of subjects (two out of five) without negatively impacting performance in the others. We hypothesize that, during reciprocal learning, subjects can learn to produce more distinguishable muscle activation patterns and generate more separable biosignals.