Existing assistive technologies (AT) often adopt a one-size-fits-all approach, overlooking the diverse needs of people with visual impairments (PVI). Do-it-yourself AT (DIY-AT) toolkits offer one path toward customization, but most remain limited: they either target co-design with engineers or require programming expertise. Non-professionals with disabilities, including PVI, also face barriers such as inaccessible tools, lack of confidence, and insufficient technical knowledge. These gaps highlight the need for prototyping technologies that enable PVI to make their own AT directly. Building on emerging evidence that large language models (LLMs) can serve not only as visual aids but also as co-design partners, we present an exploratory study of how LLM-based AI can support PVI in the tangible DIY-AT co-making process. Our findings surface key challenges and design opportunities: the need for greater spatial and visual support, strategies for mitigating novel AI errors, and implications for designing more accessible AI-assisted prototyping tools.