The ability to display rich facial expressions is crucial for human-like robotic heads. While manually defining such expressions is intricate, approaches already exist to learn them automatically. In this work, one such approach is applied to evaluate and control a robot head different from the one in the original study. To improve the mapping of facial expressions from human actors onto a robot head, we propose using 3D landmarks and their pairwise distances as input to the learning algorithm instead of the previously used facial action units. Participants in an online survey preferred mappings produced by our proposed approach in most cases, though further improvements are still required.
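To illustrate the proposed input representation, the sketch below computes pairwise Euclidean distances from a set of 3D landmarks. This is a minimal, hypothetical example (the landmark coordinates and count are made up, not taken from the paper); it only shows how such a distance-based feature vector could be derived.

```python
import numpy as np

# Hypothetical 3D facial landmarks as (x, y, z) coordinates.
# A real system would extract many more landmarks from a face tracker.
landmarks = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 3.0],
    [1.0, 1.0, 1.0],
])

def pairwise_distances(points: np.ndarray) -> np.ndarray:
    """Return the condensed vector of pairwise Euclidean distances."""
    diffs = points[:, None, :] - points[None, :, :]   # shape (n, n, 3)
    dist_matrix = np.linalg.norm(diffs, axis=-1)      # shape (n, n)
    iu = np.triu_indices(len(points), k=1)            # upper triangle, no diagonal
    return dist_matrix[iu]                            # n*(n-1)/2 distances

features = pairwise_distances(landmarks)
print(features.shape)  # 5 landmarks -> 10 pairwise distances
```

Such a vector of inter-landmark distances could then serve as input features for the learning algorithm, in place of facial action unit activations.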