The widespread adoption of large language models (LLMs) across various regions underscores the urgent need to evaluate their alignment with human values. Current benchmarks, however, fall short of effectively uncovering safety vulnerabilities in LLMs. Although numerous models achieve high scores and 'top the chart' in these evaluations, a significant gap remains between such scores and LLMs' deeper alignment with human values and genuine harmlessness. To this end, this paper proposes a value alignment benchmark named Flames, which encompasses both common harmlessness principles and a unique morality dimension that integrates specific Chinese values such as harmony. Accordingly, we carefully design adversarial prompts that incorporate complex scenarios and jailbreaking methods, mostly with implicit malice. By prompting 17 mainstream LLMs, we obtain model responses and rigorously annotate them for detailed evaluation. Our findings indicate that all the evaluated LLMs demonstrate relatively poor performance on Flames, particularly in the safety and fairness dimensions. We also develop a lightweight, specialized scorer capable of scoring LLMs across multiple dimensions, enabling efficient evaluation of new models on the benchmark. The complexity of Flames far exceeds that of existing benchmarks, setting a new challenge for contemporary LLMs and highlighting the need for further alignment of LLMs. Our benchmark is publicly available at https://github.com/AIFlames/Flames.