Robotic teaching assistants (TAs) often use body-mounted screens to deliver content. In nomadic, walk-and-talk learning, such as tours in makerspaces, these screens can distract learners from real-world objects, increasing extraneous cognitive load. HCI research lacks empirical comparisons of potential alternatives, such as robots with in-situ projection versus screen-based counterparts, and offers little guidance for designing them. We introduce ProjecTA, a semi-humanoid, gesture-capable TA that guides learners while projecting near-object overlays coordinated with speech and gestures. In a mixed-method study (N=24) in a university makerspace, ProjecTA significantly reduced extraneous load and outperformed its screen-based counterpart in perceived usability, usefulness of the visual display, and cross-modal complementarity. Qualitative analyses revealed how ProjecTA's coordinated projections, gestures, and speech anchored explanations in place and time, enhancing understanding in ways a screen could not. We derive key design implications for future robotic TAs that leverage spatial projection to support mobile learning in physical environments.