Large language models (LLMs) are increasingly being integrated into everyday products and services, such as coding tools and writing assistants. As these embedded AI applications are deployed globally, there is growing concern that the models underlying them prioritize Western values. This paper investigates what happens when a Western-centric AI model provides writing suggestions to users from a different cultural background. We conducted a cross-cultural controlled experiment with 118 participants from India and the United States, who completed culturally grounded writing tasks with and without AI suggestions. Our analysis reveals that AI provided greater efficiency gains for American participants than for Indian participants. Moreover, AI suggestions led Indian participants to adopt Western writing styles, altering not just what is written but also how it is written. These findings show that Western-centric AI models homogenize writing toward Western norms, diminishing the nuances that differentiate cultural expression.