The debate around bias in AI systems is central to discussions on algorithmic fairness. However, the term "bias" often lacks a clear definition, even though it is frequently contrasted with fairness, implying that an unbiased model is inherently fair. In this paper, we challenge this assumption and argue that a precise conceptualization of bias is necessary to effectively address fairness concerns. Rather than viewing bias as inherently negative or unfair, we highlight the importance of distinguishing between bias and discrimination. We further explore how this shift in focus can foster a more constructive discourse within academic debates on fairness in AI systems.