The success of Large Language Models (LLMs) in multicultural environments hinges on their ability to understand users' diverse cultural backgrounds. We measure this capability by having an LLM simulate human profiles representing various nationalities within the scope of a questionnaire-style psychological experiment. Specifically, we employ GPT-3.5 to reproduce the reactions of 7,286 participants from 15 countries to persuasive news articles, comparing the results with a dataset of real participants sharing the same demographic traits. Our analysis shows that specifying a person's country of residence improves GPT-3.5's alignment with their responses. In contrast, prompting in participants' native languages introduces shifts that significantly reduce overall alignment, with some languages impairing performance more than others. These findings suggest that while direct nationality information enhances the model's cultural adaptability, native-language cues do not reliably improve simulation fidelity and can detract from the model's effectiveness.