Blind people use artificial intelligence-enabled visual assistance technologies (AI VAT) to gain visual access in their everyday lives, but these technologies are embedded with errors that may be difficult to verify non-visually. Previous studies have primarily explored sighted users' understanding of AI output and created vision-dependent explainable AI (XAI) features. We extend this body of literature by conducting an in-depth qualitative study with 26 blind people to understand their verification experiences and preferences. We begin by describing the errors blind people encounter, highlighting how AI VAT fails to support complex document layouts, diverse languages, and cultural artifacts. We then illuminate how blind people make sense of AI output by experimenting with AI VAT, employing non-visual skills, strategically including sighted people, and cross-referencing with other devices. Participants identified detailed opportunities for designing accessible XAI, such as affordances to support contestation. Informed by the disability studies framework of misfitting and fitting, we unpack harmful assumptions embedded in AI VAT, underscoring the importance of celebrating disabled ways of knowing. Lastly, we offer practical takeaways for Responsible AI practice to push the field of accessible XAI forward.