Safety-aligned large language models (LLMs) are becoming increasingly widespread, especially in sensitive applications where fairness is essential and biased outputs can cause significant harm. However, evaluating model fairness is a complex challenge, and existing approaches typically rely on standard question-answering (QA) schemes. Such methods often overlook deeper issues by interpreting a model's refusal responses as positive fairness measurements, creating a false sense of fairness. In this work, we introduce the concept of silenced biases: unfair preferences encoded within a model's latent space that are effectively concealed by safety alignment. Previous approaches to such indirect biases often relied on prompt manipulation or handcrafted implicit queries, which limit scalability and risk contaminating the evaluation with additional biases. We propose the Silenced Bias Benchmark (SBB), which uncovers these biases by employing activation steering to reduce model refusals during QA. SBB supports easy expansion to new demographic groups and subjects, providing a fairness evaluation framework that encourages the future development of fair models and tools beyond the masking effects of alignment training. We demonstrate our approach on multiple LLMs, and our findings expose an alarming gap between models' direct responses and their underlying fairness issues.