Large Language Model (LLM) chatbots such as ChatGPT have emerged as cognitive scaffolding for autistic users, yet the tension between their utility and their risks remains under-articulated. Through an inductive thematic analysis of 3,984 social media posts by self-identified autistic users, we applied a technology affordance lens to examine this duality. We found that while users leveraged ChatGPT to offload executive dysfunction, regulate emotions, translate neurotypical communication, and validate their autistic identity, these affordances coexisted with risks to well-being: reinforcing delusional thinking, erasing authentic identity through automated masking, and triggering conflicts with the autistic sense of justice. As preliminary work, this poster identifies trade-offs in autistic users' interactions with ChatGPT and concludes by outlining our future work on neuro-inclusive technologies that address these tensions through beneficial friction, bidirectional translation, and the delineation of emotional validation from reality.