The rapid deployment of AI systems in high-stakes domains, including those classified as high-risk under the EU AI Act (Regulation (EU) 2024/1689), has intensified the need for reliable compliance auditing. For binary classifiers, regulatory risk assessment often relies on global fairness metrics such as the Disparate Impact ratio, widely used to evaluate potential discrimination. In typical auditing settings, the auditee provides a subset of its dataset to an auditor, while a supervisory authority may verify whether this subset is representative of the full underlying distribution. In this work, we investigate to what extent a malicious auditee can construct a fairness-compliant yet representative-looking sample from a non-compliant original distribution, thereby creating an illusion of fairness. We formalize this problem as a constrained distributional projection task and introduce mathematically grounded manipulation strategies based on entropic and optimal transport projections. These constructions characterize the minimal distributional shift required to satisfy fairness constraints. To counter such attacks, we formalize representativeness through distributional-distance-based statistical tests and systematically evaluate their ability to detect manipulated samples. Our analysis highlights the conditions under which fairness manipulation can remain statistically undetected and provides practical guidelines for strengthening supervisory verification. We validate our theoretical findings through experiments on standard tabular datasets for bias detection. Code is publicly available at https://github.com/ValentinLafargue/Inspection.
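As a concrete illustration of the global fairness metric named above, the Disparate Impact ratio compares the rates of positive predictions between an unprivileged and a privileged group. The sketch below is a minimal, self-contained computation; the group encoding (0 = unprivileged, 1 = privileged) and the 0.8 "four-fifths" reference threshold are common conventions, not details taken from this paper.

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Disparate Impact ratio: positive-prediction rate of the unprivileged
    group (group == 0) divided by that of the privileged group (group == 1)."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()  # P(Y_hat = 1 | unprivileged)
    rate_priv = y_pred[group == 1].mean()    # P(Y_hat = 1 | privileged)
    return rate_unpriv / rate_priv

# Toy sample: 4/10 positives in the unprivileged group vs 8/10 in the privileged one.
y_pred = np.array([1] * 4 + [0] * 6 + [1] * 8 + [0] * 2)
group = np.array([0] * 10 + [1] * 10)
print(disparate_impact(y_pred, group))  # 0.5, well below the common 0.8 threshold
```

A ratio near 1 indicates parity; the manipulation strategies studied in the paper aim to push this statistic above the compliance threshold while keeping the submitted sample close to the original distribution.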