Speaking aloud to a wearable AR assistant in public can be socially awkward, and re-articulating the same requests every day creates unnecessary effort. We present SpeechLess, a wearable AR assistant that introduces a speech-based intent granularity control paradigm grounded in personalized spatial memory. SpeechLess helps users "speak less" while still obtaining the information they need, and supports gradual explicitation of intent when more complex expression is required. SpeechLess binds prior interactions to multimodal personal context (space, time, activity, and referents) to form spatial memories, and leverages them to infer missing intent dimensions from under-specified user queries. This enables users to dynamically adjust how explicitly they express their informational needs, along a spectrum from full-utterance to micro- and zero-utterance interaction. We motivate our design through a week-long formative study using a commercial smart glasses platform, which revealed discomfort with public voice use, frustration with repetitive speech, and hardware constraints. Building on these insights, we design SpeechLess and evaluate it through controlled lab and in-the-wild studies. Our results indicate that regulated speech-based interaction can improve everyday information access, reduce articulation effort, and support socially acceptable use without substantially degrading perceived usability or intent resolution accuracy across diverse everyday environments.