The complexity of neural networks and inference tasks, coupled with demands for computational efficiency and real-time feedback, poses significant challenges for resource-constrained edge devices. Collaborative inference mitigates this by assigning shallow feature extraction to edge devices and offloading the extracted features to the cloud for further inference, reducing the computational load. However, transmitted features remain susceptible to model inversion attacks (MIAs), which can reconstruct the original input data. Current defenses, such as perturbation and information-bottleneck techniques, offer explainable protection but face limitations, including the lack of standardized criteria for assessing MIA difficulty, challenges in mutual information estimation, and trade-offs among usability, privacy, and deployability. To address these challenges, we introduce the first criterion for evaluating MIA difficulty in collaborative inference, supported by theoretical analysis of existing attacks and defenses and validated experimentally with the Mutual Information Neural Estimator (MINE). Based on these findings, we propose SiftFunnel, a privacy-preserving framework for collaborative inference. The edge model is trained with linear and non-linear correlation constraints to reduce redundant information in transmitted features, enhancing privacy protection. Label smoothing and a cloud-based upsampling module are added to balance usability and privacy. To improve deployability, the edge model incorporates a funnel-shaped structure and attention mechanisms, preserving both privacy and usability. Extensive experiments demonstrate that SiftFunnel outperforms state-of-the-art defenses against MIAs, achieving superior privacy protection with less than 3% accuracy loss and striking an optimal balance among usability, privacy, and practicality.
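To make the correlation-constraint idea concrete, the following is a minimal NumPy sketch (not the authors' code) of distance correlation, one standard measure of non-linear dependence that a training penalty of this kind could minimize between the raw inputs `x` and the transmitted features `z`; all function names and the toy data are illustrative assumptions.

```python
import numpy as np

def _centered_dist(x):
    # Pairwise Euclidean distance matrix, double-centered
    # (subtract row means, column means, add back the grand mean).
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d - d.mean(0, keepdims=True) - d.mean(1, keepdims=True) + d.mean()

def distance_correlation(x, z):
    # Sample distance correlation in [0, 1]: near 0 when z carries little
    # information about x, near 1 when z is a deterministic function of x.
    A, B = _centered_dist(x), _centered_dist(z)
    dcov2 = (A * B).mean()                       # squared distance covariance
    dvar_x, dvar_z = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / (np.sqrt(dvar_x * dvar_z) + 1e-12))

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 8))                    # toy "inputs"
z_dep = x @ rng.normal(size=(8, 4))              # features retaining input info
z_ind = rng.normal(size=(128, 4))                # features independent of input
print(distance_correlation(x, z_dep) > distance_correlation(x, z_ind))  # → True
```

In a defense like the one sketched in the abstract, such a dependence score would be added to the edge model's training loss so that transmitted features keep task-relevant information while shedding input-reconstructive redundancy.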