Large language models (LLMs) are widely deployed for open-ended communication, yet most bias evaluations still rely on English, classification-style tasks. We introduce \corpusname, a new multilingual, debate-style benchmark designed to reveal how narrative bias appears in realistic generative settings. Our dataset includes 8{,}400 structured debate prompts spanning four sensitive domains -- Women's Rights, Backwardness, Terrorism, and Religion -- across seven languages ranging from high-resource (English, Chinese) to low-resource (Swahili, Nigerian Pidgin). Using four flagship models (GPT-4o, Claude~3.5~Haiku, DeepSeek-Chat, and LLaMA-3-70B), we generate over 100{,}000 debate responses and automatically classify which demographic groups are assigned stereotyped versus modern roles. Results show that all models reproduce entrenched stereotypes despite safety alignment: Arabs are overwhelmingly linked to Terrorism and Religion ($\geq$89\%), Africans to socioeconomic ``backwardness'' (up to 77\%), and Western groups are consistently framed as modern or progressive. Biases grow sharply in lower-resource languages, revealing that alignment trained primarily in English does not generalize globally. Our findings highlight a persistent divide in multilingual fairness: current alignment methods reduce explicit toxicity but fail to prevent biased outputs in open-ended contexts. We release our \corpusname benchmark and analysis framework to support the next generation of multilingual bias evaluation and safer, culturally inclusive model alignment.