Investigating bias in large language models (LLMs) is crucial for developing trustworthy AI. While prompt-based debiasing through prompt engineering is common, its effectiveness relies on the assumption that models inherently understand biases. Our study systematically analyzed this assumption using the BBQ and StereoSet benchmarks on both open-source models and commercial GPT models. Experimental results indicate that prompt-based debiasing is often superficial; for instance, the Llama2-7B-Chat model misclassified over 90% of unbiased content as biased, despite achieving high accuracy in identifying bias issues on the BBQ dataset. Additionally, specific evaluation and question settings in bias benchmarks often lead LLMs to choose "evasive answers," disregarding the core of the question and the relevance of the response to the context. Moreover, the apparent success of previous methods may stem from flawed evaluation metrics. Our research highlights a potential "false prosperity" in prompt-based debiasing efforts and emphasizes the need to rethink bias metrics to ensure truly trustworthy AI.