Recent advancements in large language models (LLMs) have greatly enhanced natural language processing (NLP) applications. Nevertheless, these models often inherit biases from their training data. Although various datasets for bias detection are available, most are limited to one or two NLP tasks (typically classification or evaluation) and lack comprehensive evaluation across a broader range of tasks. To address this gap, we introduce the Bias Evaluations Across Domains (BEADs) dataset, designed to support a wide array of NLP tasks, including text classification, token classification, bias quantification, and benign language generation. A key focus of this paper is the gold-label dataset, which is annotated by GPT-4 for scalability and verified by experts to ensure high reliability. BEADs provides data both for fine-tuning, including classification and language generation tasks, and for evaluating LLMs. Our findings indicate that models fine-tuned on BEADs effectively identify numerous biases. Fine-tuning on BEADs for the language generation task also reduces biases while preserving language quality. The results further reveal prevalent demographic biases in LLMs when BEADs is used for evaluation on the demographic task. We provide the BEADs dataset for detecting biases across various domains, and it is readily usable for responsible AI development and applications. The dataset can be accessed at https://huggingface.co/datasets/shainar/BEAD .