Large language models (LLMs) are increasingly integrated into everyday products and services, such as coding tools and writing assistants. As these embedded AI applications are deployed globally, there is growing concern that the underlying models prioritize Western values. This paper investigates what happens when a Western-centric AI model provides writing suggestions to users from a different cultural background. We conducted a cross-cultural controlled experiment with 118 participants from India and the United States who completed culturally grounded writing tasks with and without AI suggestions. Our analysis shows that AI provided greater efficiency gains for American participants than for Indian participants. Moreover, AI suggestions led Indian participants to adopt Western writing styles, altering not only what is written but also how it is written. These findings indicate that Western-centric AI models homogenize writing toward Western norms, diminishing the nuances that differentiate cultural expression.