Augmentative and Alternative Communication (AAC) technologies are categorized into two forms: aided AAC, which uses external devices such as speech-generating systems to produce standardized output, and unaided AAC, which relies on body-based gestures for natural expression but requires shared understanding between communication partners. We investigate how to combine these approaches to harness the speed and naturalness of unaided AAC while maintaining the intelligibility of aided AAC, a combination that remains largely unexplored for individuals with both communication and motor impairments. Through 18 months of participatory design with AAC users, we identified key challenges and opportunities and developed AllyAAC, a wearable system that pairs a wrist-worn IMU with a smartphone app. We evaluated AllyAAC in a field study with 14 participants and collected a dataset of over 600,000 multimodal data points of atypical gestures, the first dataset of its kind. Our findings reveal the challenges of recognizing personalized, idiosyncratic gestures and demonstrate how to address them with large Transformer-based machine learning (ML) models under different pretraining strategies. In sum, we contribute design principles and a reference implementation for adaptive, personalized systems that combine aided and unaided AAC.
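To make the modeling approach concrete, the following is a minimal sketch of the kind of Transformer-based gesture recognizer the abstract alludes to, assuming windowed 6-axis IMU input (accelerometer plus gyroscope) and a pretrain-then-personalize workflow in PyTorch. The architecture, dimensions, class counts, and the frozen-encoder fine-tuning strategy are illustrative assumptions, not AllyAAC's published implementation.

```python
# Illustrative sketch only: a small Transformer encoder over fixed-length
# windows of 6-channel IMU data, pretrained on pooled gesture data and then
# personalized by fine-tuning a fresh classification head per user.
import torch
import torch.nn as nn


class IMUGestureTransformer(nn.Module):
    """Classifies fixed-length windows of 6-axis IMU data (accel + gyro)."""

    def __init__(self, n_classes: int, d_model: int = 64, n_heads: int = 4,
                 n_layers: int = 2, window_len: int = 128):
        super().__init__()
        self.embed = nn.Linear(6, d_model)          # 6 IMU channels -> d_model
        self.pos = nn.Parameter(torch.randn(1, window_len, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)   # replaced when personalizing

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_len, 6)
        h = self.encoder(self.embed(x) + self.pos)  # learned positional encoding
        return self.head(h.mean(dim=1))             # mean-pool over time


# Pretrain on a generic gesture corpus, then personalize: freeze the shared
# encoder and fine-tune only a new head on one user's idiosyncratic gestures.
model = IMUGestureTransformer(n_classes=20)
# ... pretraining loop over pooled data would go here ...
for p in model.encoder.parameters():
    p.requires_grad = False
model.head = nn.Linear(64, 8)                       # 8 personalized gestures
dummy = torch.randn(4, 128, 6)                      # (batch, time, channels)
print(model(dummy).shape)                           # torch.Size([4, 8])
```

One plausible motivation for this split is that pooled pretraining gives the encoder a general representation of wrist motion, while the lightweight per-user head can be retrained from the small amount of data an individual AAC user can realistically provide.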