Owing to their impressive performance on a wide range of downstream tasks, large language models (LLMs) have been widely integrated into production pipelines such as recruitment and recommendation systems. A known issue with models trained on natural language data is that they absorb human biases, which can compromise the fairness of the systems built on them. This paper investigates LLMs' behavior with respect to gender stereotypes in the context of occupation decision making. We design a framework that investigates and quantifies gender stereotypes in LLMs' behavior through multi-round question answering. Inspired by prior work, we construct a dataset from a standard occupation classification knowledge base released by authoritative agencies. We tested three LLMs (RoBERTa-large, GPT-3.5-turbo, and Llama2-70b-chat) and found that all of them exhibit gender stereotypes analogous to human biases, albeit with different preferences. The distinct preferences of GPT-3.5-turbo and Llama2-70b-chat suggest that current alignment methods are insufficient for debiasing and may even introduce new biases that contradict traditional gender stereotypes.
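As a rough illustration of this kind of probing (a minimal sketch, not the paper's released framework; the prompt template and occupation list here are hypothetical), the snippet below queries a masked language model such as RoBERTa-large for the relative probabilities of gendered pronouns in an occupation context. The multi-round question-answering setup described above would extend this idea with follow-up questions posed to chat models such as GPT-3.5-turbo and Llama2-70b-chat.

```python
# Minimal sketch of a single-round gender-association probe for a masked LM.
# Assumptions: the template and occupations are illustrative examples only,
# not drawn from the paper's occupation-classification dataset.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-large")

occupations = ["nurse", "carpenter", "software engineer"]  # hypothetical examples
template = "The {occupation} said that <mask> would finish the task soon."

for occupation in occupations:
    prompt = template.format(occupation=occupation)
    # Restrict scoring to the gendered pronouns of interest.
    results = fill(prompt, targets=[" he", " she"])
    scores = {r["token_str"].strip(): r["score"] for r in results}
    print(f"{occupation}: he={scores.get('he', 0):.4f}, she={scores.get('she', 0):.4f}")
```

Comparing the two pronoun probabilities per occupation gives one simple, per-prompt signal of stereotypical association; aggregating such signals across an occupation taxonomy is what allows stereotype strength to be quantified.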