This paper introduces the concept of augmented conversation, which aims to support co-located, in-person conversations via embedded, speech-driven, on-the-fly referencing in augmented reality (AR). Today, computing technologies such as smartphones allow quick access to a variety of references during a conversation. However, these tools often create distractions, reducing eye contact and forcing users to shift their attention to phone screens and manually enter keywords to access relevant information. In contrast, AR-based on-the-fly referencing provides relevant visual references in real time, based on keywords automatically extracted from the spoken conversation. By embedding these visual references in AR around the conversation partner, augmented conversation reduces distraction and friction, allowing users to maintain eye contact and supporting more natural social interactions. To demonstrate this concept, we developed \system, a HoloLens-based interface that leverages real-time speech recognition, natural language processing, and gaze-based interaction for on-the-fly embedded visual referencing. In this paper, we explore the design space of visual referencing for conversations and describe our implementation, which builds on seven design guidelines identified through a user-centered design process. An initial user study confirms that our system decreases distraction and friction in conversations compared to smartphone searches, while providing highly useful and relevant information.
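To make the described pipeline concrete, the following is a minimal sketch in Python of the flow the abstract outlines: streaming speech recognition produces transcript chunks, keywords are extracted, and candidate visual references are fetched for the AR client to place around the conversation partner. All names here (\texttt{extract\_keywords}, \texttt{fetch\_reference}, \texttt{on\_transcript\_chunk}, \texttt{VisualReference}) are hypothetical illustrations, not \system's actual implementation; in particular, the toy stopword filter stands in for a real NLP stage such as named-entity recognition, and the example URL stands in for an image-search API.

\begin{verbatim}
# Minimal sketch of an on-the-fly referencing pipeline (hypothetical;
# not the authors' implementation). A real system would pair a
# streaming speech-to-text service with an AR client on HoloLens.
from dataclasses import dataclass

STOPWORDS = {"the", "a", "an", "and", "or", "but", "so", "to", "of",
             "in", "on", "is", "are", "was", "were", "i", "you", "we",
             "it", "that", "this", "my", "your", "last"}

@dataclass
class VisualReference:
    keyword: str
    image_url: str  # e.g., top image-search result for the keyword

def extract_keywords(transcript_chunk: str) -> list[str]:
    """Naive keyword extraction: keep non-stopword tokens.

    A real system would use NLP (e.g., named-entity recognition or
    noun-phrase chunking) instead of this toy filter.
    """
    tokens = [t.strip(".,!?").lower() for t in transcript_chunk.split()]
    return [t for t in tokens if t and t not in STOPWORDS]

def fetch_reference(keyword: str) -> VisualReference:
    # Hypothetical lookup; a real system would query an image-search API.
    return VisualReference(keyword, f"https://example.com/search?q={keyword}")

def on_transcript_chunk(chunk: str) -> list[VisualReference]:
    """Called for each partial transcript from streaming speech recognition.

    Returns candidate visual references; the AR client would embed these
    around the conversation partner and let gaze select or dismiss them.
    """
    return [fetch_reference(k) for k in extract_keywords(chunk)]

if __name__ == "__main__":
    for ref in on_transcript_chunk("We hiked Mount Rainier in summer"):
        print(ref.keyword, "->", ref.image_url)
\end{verbatim}

Keeping keyword extraction and reference fetching as separate stages, as sketched here, would let the gaze-based AR front end remain agnostic to which speech-recognition or NLP back end produces the candidates.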