In this work we investigate the sociocultural values learned by large language models (LLMs). We introduce Sociocultural Statements, a novel open-access dataset constructed from natural debate statements via a multi-step methodology. The dataset is synthetically labeled to enable quantification of the sociocultural norms and beliefs that LLMs exhibit in their responses to these statements, according to the Hofstede cultural dimensions. We verify the accuracy of the synthetic labels through human quality control on a representative sample. We then conduct a comparative analysis between two groups of LLMs developed in different countries (the U.S. and China), using patterns observed in human measurements as a comparative baseline. With this new dataset and analysis, we find that culturally distinct LLMs reflect the values and norms of the countries in which they were developed, highlighting their inability to adapt to the sociocultural backgrounds of their users.