Federated Learning has been popularized in recent years for applications involving personal or sensitive data, as it enables collaborative training of machine learning models through local updates at the data owners' premises, without requiring the data itself to be shared. Considering the risk of leakage or misuse by any of the data owners, many works attempt to protect model copyright, or even trace the origin of a potential leak, through unique watermarks identifying each participant's model copy. Realistic accusation scenarios impose a black-box setting, where watermarks are typically embedded as a set of sample-label pairs. The threat of collusion, however, where multiple bad actors conspire to produce an untraceable model, has rarely been addressed, and previous works have been limited to shallow networks and near-linearly separable main tasks. To the best of our knowledge, this work is the first to present a general collusion-resistant embedding method for black-box traitor tracing in Federated Learning: BlackCATT, which introduces a novel collusion-aware embedding loss term and, instead of using a fixed trigger set, iteratively optimizes the triggers to aid convergence and improve traitor-tracing performance. Experimental results confirm the efficacy of the proposed scheme across different architectures and datasets. Furthermore, for models that would otherwise suffer from update incompatibility on the main task after learning different watermarks (e.g., architectures including batch normalization layers), our proposed BlackCATT+FR incorporates functional regularization through a set of auxiliary examples at the aggregator, promoting a shared feature space among model copies without compromising traitor-tracing performance.
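To make the shape of a collusion-aware embedding objective concrete, the following is a minimal numpy sketch, not the paper's actual formulation: the function name, the weighting hyperparameters `lam` and `mu`, and the specific choice of an entropy-based collusion term are all hypothetical. The idea illustrated is that each participant's copy fits the main task and its own trigger set, while being pushed toward maximum-entropy (uncertain) predictions on the other participants' triggers, so that averaging colluding copies cannot cancel every watermark at once.

```python
import numpy as np

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true labels;
    # probs has shape (N, C) with rows summing to 1
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

def collusion_aware_loss(main_probs, main_labels,
                         trig_probs, trig_labels,
                         other_trig_probs_list,
                         lam=1.0, mu=0.5):
    """Hypothetical combined objective for one participant's model copy:
    main-task loss + watermark-embedding loss on its own trigger set
    + a collusion-aware term that rewards high-entropy (uncertain)
    predictions on the *other* participants' trigger inputs."""
    main = cross_entropy(main_probs, main_labels)
    embed = cross_entropy(trig_probs, trig_labels)
    # negative mean entropy: minimized when predictions on others'
    # triggers are uniform over the classes
    coll = 0.0
    for p in other_trig_probs_list:
        coll += float(np.mean(np.sum(p * np.log(p + 1e-12), axis=1)))
    coll /= max(len(other_trig_probs_list), 1)
    return main + lam * embed + mu * coll
```

A fixed trigger set would evaluate this loss on static inputs; BlackCATT instead re-optimizes the trigger samples between rounds, which this sketch does not model.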