Robotic teaching assistants (TAs) often use body-mounted screens to deliver content. In nomadic, walk-and-talk learning, such as tours of makerspaces, these screens can distract learners from real-world objects, increasing extraneous cognitive load. HCI research lacks empirical comparisons of potential alternatives, such as robots with in-situ projection versus their screen-based counterparts, and offers little design knowledge for building them. We introduce ProjecTA, a semi-humanoid, gesture-capable TA that guides learners while projecting near-object overlays coordinated with its speech and gestures. In a mixed-methods study (N=24) in a university makerspace, ProjecTA significantly reduced extraneous load and outperformed its screen-based counterpart in perceived usability, usefulness of the visual display, and cross-modal complementarity. Qualitative analyses revealed how ProjecTA's coordinated projections, gestures, and speech anchored explanations in place and time, enhancing understanding in ways a screen could not. We derive key design implications for future robotic TAs that leverage spatial projection to support mobile learning in physical environments.