Ableist language perpetuates harmful stereotypes and exclusion, yet its nuanced nature makes it difficult to recognize and address. Artificial intelligence could serve as a powerful ally in the fight against ableist language, offering tools that detect and suggest alternatives to biased terms. This two-part study investigates the potential of large language models (LLMs), specifically ChatGPT, to rectify ableist language and educate users about inclusive communication. We compared GPT-4o generations with crowdsourced annotations from trained disability community members, then invited disabled participants to evaluate both. Participants reported equal agreement with human and AI annotations but significantly preferred the AI, citing its narrative consistency and accessible style. At the same time, they valued the emotional depth and cultural grounding of human annotations. These findings highlight the promise and limits of LLMs in handling culturally sensitive content. Our contributions include a dataset of nuanced ableism annotations and design considerations for inclusive writing tools.