Ensuring privacy-preserving inference on cryptographically secure data is a well-known computational challenge. To alleviate the bottleneck of costly cryptographic computations in non-linear activations, recent methods have suggested linearizing a targeted portion of these activations in neural networks. This technique results in significantly reduced runtimes with often negligible impacts on accuracy. In this paper, we demonstrate that such computational benefits may lead to increased fairness costs. Specifically, we find that reducing the number of ReLU activations disproportionately decreases the accuracy for minority groups compared to majority groups. To explain these observations, we provide a mathematical interpretation under restricted assumptions about the nature of the decision boundary, while also showing the prevalence of this problem across widely used datasets and architectures. Finally, we show how a simple procedure altering the fine-tuning step for linearized models can serve as an effective mitigation strategy.
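The partial linearization discussed above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the `forward` function, the `relu_mask` parameter, and the toy two-layer network are all illustrative assumptions, showing only the core idea of replacing a chosen subset of ReLU activations with the identity function.

```python
import numpy as np

def relu(x):
    """Standard ReLU activation."""
    return np.maximum(x, 0.0)

def forward(x, W1, W2, relu_mask):
    """Toy two-layer MLP with partial ReLU linearization.

    relu_mask[i] = True  -> hidden unit i keeps its ReLU (costly under
                            cryptographic inference protocols)
    relu_mask[i] = False -> hidden unit i uses the identity instead
                            (cheap, fully linear)
    """
    h = x @ W1
    h = np.where(relu_mask, relu(h), h)  # linearize only the masked-out units
    return h @ W2

# Hypothetical tiny network, for illustration only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 2))
x = rng.normal(size=(1, 4))

mask_full = np.ones(8, dtype=bool)   # all ReLUs kept (standard network)
mask_half = mask_full.copy()
mask_half[:4] = False                # half the ReLUs linearized
```

With `mask_half`, half the hidden units behave linearly, cutting the number of non-linear operations that a secure-inference protocol must evaluate; the paper's observation is that accuracy lost to this approximation is not borne evenly across demographic groups.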