Machine learning models are commonly tested in-distribution (on the same dataset they were trained on); performance almost always drops in out-of-distribution settings. For HRI research, the goal is often to develop models that generalize, which makes domain generalization (retaining performance across different settings) a critical issue. In this study, we present a concise analysis of domain generalization in failure detection models trained on human facial expressions. Using two distinct datasets of humans reacting to videos in which errors occur, one collected in a controlled lab setting and the other online, we trained a deep learning model on each dataset. When testing each model on the alternate dataset, we observed a significant performance drop. We reflect on the causes of the observed model behavior and offer recommendations. This work emphasizes the need for HRI research focused on improving model robustness and real-life applicability.