The integration of large language models (LLMs) into cultures worldwide presents a fundamental challenge: LLMs must navigate interactions, respect social norms, and avoid transgressing cultural boundaries. However, it remains unclear whether LLMs can adapt their outputs to diverse cultural norms. Our study focuses on this question. We introduce NormAd, a novel dataset of 2.6k stories representing social and cultural norms from 75 countries, designed to assess how well LLMs adapt to socio-cultural contexts at different levels of granularity: the country of origin, its associated cultural values, and its prevalent social norms. Our study reveals that LLMs struggle with cultural reasoning at every level of contextual granularity, adapting more readily to English-centric cultures than to those from the Global South. Even when explicit social norms are provided, the top-performing model, Mistral-7b-Instruct, achieves only 81.8% accuracy, lagging behind the 95.6% achieved by humans. Evaluation on NormAd further reveals that LLMs struggle to adapt to stories involving gift-giving across cultures. Owing to inherent agreement or sycophancy biases, LLMs find it considerably easier to assess the social acceptability of stories that adhere to norms than of those that deviate from them. Our benchmark measures the cultural adaptability (or lack thereof) of LLMs, highlighting the potential to make these technologies more equitable and useful for global audiences. We release the NormAd dataset and its associated code on GitHub.