Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input with only a slight perturbation. Importantly, without the appropriate "backdoor key", the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees. First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that, given black-box access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable to that of the original model. Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm or in Random ReLU networks. In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is "clean" or contains a backdoor. Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, our construction can produce a classifier that is indistinguishable from an "adversarially robust" classifier, but in which every input has an adversarial example! In summary, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
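To make the first framework concrete, here is a minimal sketch of the signature-based backdoor idea. Everything in it is an illustrative assumption rather than the paper's construction verbatim: the names `BackdooredClassifier` and `activate`, the encoding that stashes an Ed25519 signature in the last 64 input coordinates, and the label-flip on a binary classifier are all choices made for this sketch. It uses the `cryptography` package's Ed25519 API. The key property it illustrates: without the private key, producing an input that makes the wrapped model deviate from the honest one amounts to forging a signature.

```python
# A minimal sketch of a signature-triggered backdoor (illustrative, not
# the paper's exact construction, which bakes the verifier into the model).
import numpy as np
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

SIG_BYTES = 64  # Ed25519 signatures are 64 bytes


class BackdooredClassifier:
    """Wraps an honest binary (0/1) model; flips the label iff the input
    carries a valid signature under the attacker's key."""

    def __init__(self, honest_model, public_key):
        self.honest_model = honest_model
        self.public_key = public_key

    def predict(self, x: np.ndarray) -> int:
        # Interpret the last SIG_BYTES coordinates (quantized to bytes)
        # as a candidate signature over the remaining coordinates.
        payload = x[:-SIG_BYTES]
        sig = bytes(int(v) & 0xFF for v in x[-SIG_BYTES:])
        try:
            self.public_key.verify(sig, payload.tobytes())
            return 1 - self.honest_model(payload)  # backdoor fires: flip label
        except InvalidSignature:
            # Without the secret key, finding a valid (payload, sig) pair is
            # computationally infeasible, so on all efficiently findable
            # inputs the model agrees with the honest one.
            return self.honest_model(payload)


def activate(x: np.ndarray, private_key: Ed25519PrivateKey) -> np.ndarray:
    """Attacker-side: perturb x so the backdoor fires on it."""
    payload = x[:-SIG_BYTES]
    sig = private_key.sign(payload.tobytes())
    return np.concatenate([payload, np.frombuffer(sig, dtype=np.uint8)])


# Usage sketch: the attacker keeps `sk` secret and ships only the model.
sk = Ed25519PrivateKey.generate()
model = BackdooredClassifier(honest_model=lambda z: 0, public_key=sk.public_key())
```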
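For the second framework, the following sketch shows the honest Random Fourier Features pipeline that the construction subverts. The paper's attack replaces the Gaussian draws of `W` below with cryptographically backdoored coins that no efficient white-box distinguisher can tell apart from N(0, I); that sampler is not reproduced here. The function names, the ridge-regression fit, and the default dimensions are illustrative assumptions for this sketch only.

```python
# Honest RFF training (a sketch); the malicious learner would substitute
# backdoored pseudorandomness at the line marked below.
import numpy as np


def rff_features(X: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Map inputs to random Fourier features approximating a Gaussian kernel."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)


def train_rff_classifier(X, y, D=512, sigma=1.0, reg=1e-3, rng=None):
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    # Honest learner: W is drawn from an isotropic Gaussian. The paper's
    # attack swaps these draws for computationally indistinguishable,
    # backdoored randomness.
    W = rng.normal(scale=1.0 / sigma, size=(D, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    Phi = rff_features(X, W, b)
    # Ridge regression on the random features (one of many possible fits).
    theta = np.linalg.solve(Phi.T @ Phi + reg * np.eye(D), Phi.T @ y)
    return lambda X_new: np.sign(rff_features(X_new, W, b) @ theta)
```

Because the final model is fully determined by (W, b, theta), white-box undetectability here means that even an observer inspecting all three, along with the training data, cannot distinguish backdoored coins from honest Gaussian samples.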