Recent progress in large language models (LLMs) has led to their widespread adoption across domains. However, these advancements have also introduced additional safety risks and raised concerns regarding their detrimental impact on already marginalized populations. Despite growing mitigation efforts to develop safety safeguards, such as supervised safety-oriented fine-tuning and safe reinforcement learning from human feedback, multiple concerns regarding the safety and ingrained biases of these models remain. Furthermore, previous work has demonstrated that models optimized for safety often display exaggerated safety behaviors, such as refusing to respond to certain requests as a precautionary measure. As such, a clear trade-off between the helpfulness and safety of these models has been documented in the literature. In this paper, we further investigate the effectiveness of safety measures by evaluating models on biases that have already been mitigated. Using Llama 2 as a case study, we illustrate how LLMs' safety responses can still encode harmful assumptions. To do so, we create a set of non-toxic prompts, which we then use to evaluate Llama models. Through our new taxonomy of LLM responses to users, we observe that the safety/helpfulness trade-off is more pronounced for certain demographic groups, which can lead to quality-of-service harms for marginalized populations.