Ensuring privacy-preserving inference on cryptographically secure data is a well-known computational challenge. To alleviate the bottleneck of costly cryptographic computations in non-linear activations, recent methods have suggested linearizing a targeted portion of these activations in neural networks. This technique results in significantly reduced runtimes with often negligible impacts on accuracy. In this paper, we demonstrate that such computational benefits may lead to increased fairness costs. Specifically, we find that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups. To explain these observations, we provide a mathematical interpretation under restricted assumptions about the nature of the decision boundary, while also showing the prevalence of this problem across widely used datasets and architectures. Finally, we show how a simple procedure altering the fine-tuning step for linearized models can serve as an effective mitigation strategy.
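The linearization idea described above — replacing selected ReLU activations with identity to cut cryptographic cost — can be sketched in a toy forward pass. This is a minimal illustration under assumed names (`forward`, `linearize_mask`), not the paper's actual selection procedure or any specific private-inference framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, linearize_mask):
    """Toy MLP forward pass.

    linearize_mask[i] == True means the ReLU after layer i is
    replaced by identity (i.e. that activation is "linearized"),
    which in a private-inference setting avoids the expensive
    non-linear cryptographic protocol for that layer.
    """
    h = x
    for W, linearized in zip(weights, linearize_mask):
        h = h @ W
        if not linearized:
            h = relu(h)
    return h

weights = [rng.standard_normal((4, 4)) for _ in range(3)]
x = rng.standard_normal((2, 4))

full = forward(x, weights, [False, False, False])      # all ReLUs kept
partial = forward(x, weights, [False, True, False])    # middle ReLU linearized
```

In practice, methods in this line of work choose which activations to linearize via a learned or budget-constrained criterion and then fine-tune the resulting model; the mask here is hand-picked purely for illustration.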