We investigate the accessibility of intelligent personal assistants (IPAs) for deaf and hard of hearing (DHH) people who use their voice in everyday communication. The inability of IPAs to understand diverse accents, including deaf speech, renders them largely inaccessible to non-signing, speaking DHH individuals. Using an Echo Show, we conducted a mixed-methods study comparing the usability of natural language input via spoken English, both with Alexa's automatic speech recognition and in a Wizard-of-Oz setting in which a trained facilitator re-spoke commands, against that of a large language model (LLM)-assisted touch interface. The touch method was navigated through an LLM-powered "task prompter," which integrated the user's history and smart environment to suggest contextually appropriate commands. Quantitative results showed no significant differences between the two spoken English conditions and LLM-assisted touch. Qualitative results showed variability in opinions on the usability of each method. Ultimately, robust native recognition of deaf-accented speech by IPAs will be necessary.