Facial expression recognition plays an important role in human behaviour, communication, and interaction. Recent neural networks have been shown to perform well at recognising facial expressions automatically, and several explainability techniques are available to make them more transparent. In this work, we present a facial expression recognition study for people with intellectual disabilities, intended for integration into a social robot. We train two well-known neural networks on five facial expression databases and test them on two databases containing people with and without intellectual disabilities. Finally, we study which facial regions the models focus on to perceive a particular expression, using two explainability techniques, LIME and RISE, and assess the differences when they are applied to images of people with and without intellectual disabilities.
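To illustrate the second of the two techniques, the core idea of RISE (randomly masking the input and weighting each mask by the model's score) can be sketched as follows. This is a simplified illustration, not the paper's implementation: it assumes a grayscale image whose side lengths are divisible by the mask grid, uses nearest-neighbour upsampling instead of the smoothed, randomly shifted masks of the original method, and `toy_model` is a hypothetical stand-in for a real classifier's confidence score.

```python
import numpy as np

def rise_saliency(image, model_fn, n_masks=500, grid=8, p=0.5, seed=0):
    """Estimate a RISE-style saliency map for a 2-D grayscale image.

    Each random low-resolution binary mask is upsampled to image size,
    applied to the image, and accumulated weighted by the model's score
    on the masked input. Pixels that the model relies on end up with
    higher average weight.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape
    cell_h, cell_w = H // grid, W // grid
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Low-resolution binary mask: each cell kept with probability p
        small = (rng.random((grid, grid)) < p).astype(float)
        # Nearest-neighbour upsampling to full image resolution
        mask = np.kron(small, np.ones((cell_h, cell_w)))
        saliency += model_fn(image * mask) * mask
    # Normalise by the expected number of times each pixel is unmasked
    return saliency / (n_masks * p)

# Toy usage: a "model" that only looks at the top-left 8x8 patch,
# so the saliency map should concentrate there.
img = np.ones((32, 32))
def toy_model(x):
    return x[:8, :8].mean()

sal = rise_saliency(img, toy_model)
```

In the study itself, `model_fn` would be the trained network's confidence for one facial expression class, so the resulting map highlights the facial regions driving that prediction.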