Large Language Model (LLM) chatbots such as ChatGPT have emerged as cognitive scaffolding for autistic users, yet the tension between their utility and their risks remains under-articulated. Through an inductive thematic analysis of 3,984 social media posts by self-identified autistic users, we applied the Technology Affordance framework to examine this duality. We found that while users leveraged ChatGPT to offload executive dysfunction, regulate emotions, translate neurotypical communication, and validate their autistic identity, these affordances coexisted with significant risks: reinforcing delusional thinking, erasing authentic identity through automated masking, and triggering conflicts with the autistic sense of justice. This poster identifies these trade-offs in autistic users' interactions with ChatGPT and concludes by outlining our future work on neuro-inclusive technologies that address these tensions through beneficial friction and bidirectional translation.