Now that AI-driven moderation has become pervasive in everyday life, we often hear the claim that "the AI is biased". While often said in jest, the remark reflects a deeper concern: how can we be certain that an online post flagged as "inappropriate" was not simply the victim of a biased algorithm? This paper investigates this problem using a dual approach. First, I conduct a quantitative benchmark of a widely used toxicity model (unitary/toxic-bert) to measure the performance disparity between text in African-American English (AAE) and Standard American English (SAE). The benchmark reveals a clear, systematic bias: on average, the model assigns AAE text toxicity scores 1.8 times higher, and "identity hate" scores 8.8 times higher, than comparable SAE text. Second, I introduce an interactive pedagogical tool that makes these abstract biases tangible. The tool's core mechanic, a user-controlled "sensitivity threshold", demonstrates that the biased score itself is not the only harm; the deeper harm lies in the human-set, seemingly neutral policy that ultimately operationalises the discrimination. This work provides both statistical evidence of disparate impact and a public-facing tool designed to foster critical AI literacy.
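To make the benchmark design concrete, the sketch below shows one way such a paired AAE/SAE comparison can be run against unitary/toxic-bert via the Hugging Face transformers pipeline, together with the threshold mechanic the tool exposes. This is a minimal illustration, not the paper's actual pipeline: the sentence pairs, the 0.5 cutoff, and the helper name `label_scores` are assumptions introduced here for demonstration.

```python
# A minimal sketch of a paired-dialect toxicity benchmark, assuming the
# Hugging Face `transformers` library and the unitary/toxic-bert checkpoint.
# The sentence pairs below are illustrative placeholders, NOT the study's data.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)

# Hypothetical AAE/SAE pairs expressing the same (non-toxic) content.
pairs = [
    ("he stay winning fr", "he keeps on winning, honestly"),
    ("ion even know why they mad", "I do not even know why they are upset"),
]

def label_scores(text: str) -> dict[str, float]:
    """Return a {label: score} dict; top_k=None yields all six labels
    (toxic, severe_toxic, obscene, threat, insult, identity_hate)."""
    return {d["label"]: d["score"] for d in classifier([text])[0]}

THRESHOLD = 0.5  # a human-set, seemingly neutral policy knob (assumed value)

for aae, sae in pairs:
    s_aae, s_sae = label_scores(aae), label_scores(sae)
    for label in ("toxic", "identity_hate"):
        ratio = s_aae[label] / max(s_sae[label], 1e-9)  # guard against divide-by-zero
        print(f"{label:13s} AAE={s_aae[label]:.4f} SAE={s_sae[label]:.4f} "
              f"ratio={ratio:.2f} "
              f"flag(AAE)={s_aae[label] >= THRESHOLD} "
              f"flag(SAE)={s_sae[label] >= THRESHOLD}")
```

Sweeping `THRESHOLD` across its range reproduces the tool's central observation: the same underlying scores produce different flagging outcomes for the two dialects depending on where the human-chosen cutoff happens to sit.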