Large Language Models (LLMs) like ChatGPT offer potential support for autistic people, but realizing this potential requires understanding the implicit perspectives these models may carry, including their biases and assumptions about autism. Moving beyond single-agent prompting, we used LLM-based multi-agent systems to investigate complex social scenarios involving autistic and non-autistic agents. In our study, agents engaged in group-task conversations and answered structured interview questions, which we analyzed to examine ChatGPT's biases and how it conceptualizes autism. We found that ChatGPT assumes autistic people are socially dependent, an assumption that may affect how it interacts with autistic users or conveys information about autism. To address these challenges, we propose adopting the double empathy problem, which reframes communication breakdowns as a mutual challenge. We describe how future LLMs could address the biases we observed and improve interactions involving autistic people by incorporating the double empathy problem into their design.