Multilingual large language models have gained prominence for their proficiency in processing and generating text across languages. Like their monolingual counterparts, multilingual models are likely to pick up on stereotypes and other social biases present in their training data. In this paper, we study a phenomenon we term stereotype leakage: training a model multilingually may lead to stereotypes expressed in one language surfacing in the model's behaviour in another. We propose a measurement framework for stereotype leakage and investigate its effect across four languages (English, Russian, Chinese, and Hindi) and three models (GPT-3.5, mT5, and mBERT). Our findings show noticeable leakage of positive, negative, and non-polar associations across all languages. Of these models, GPT-3.5 exhibits the most stereotype leakage, and Hindi is the language most susceptible to leakage effects. WARNING: This paper contains model outputs that may be offensive in nature.
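The abstract names the measurement framework without detailing it. Purely as an illustrative sketch, and not the authors' method, the snippet below shows one way cross-lingual association probing could look with one of the models studied (mBERT): fill an attribute slot for a group term in parallel templates in two languages and compare which attributes surface. The templates, group words, and helper names are all assumptions for illustration; only the HuggingFace `fill-mask` pipeline and the `bert-base-multilingual-cased` checkpoint are real.

```python
# Hypothetical probe (NOT the paper's framework): compare the attributes a
# multilingual masked LM assigns to the same group term in two languages.
# Overlap between language-specific associations would hint at leakage.
from transformers import pipeline

# mBERT fill-mask pipeline; [MASK] is the model's mask token.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Illustrative parallel templates; {group} is substituted per language.
templates = {
    "en": "{group} people are very [MASK].",
    "ru": "{group} люди очень [MASK].",
}
group_terms = {"en": "Russian", "ru": "Русские"}

def attribute_scores(lang: str, top_k: int = 10):
    """Return the model's top-k attribute completions with probabilities."""
    prompt = templates[lang].format(group=group_terms[lang])
    return [(p["token_str"], p["score"]) for p in fill(prompt, top_k=top_k)]

for lang in templates:
    print(lang, attribute_scores(lang))
```

A full framework would of course need controlled stereotype lexicons, polarity labels, and a comparison against monolingual baselines; this sketch only makes the cross-lingual probing idea concrete.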