More and more edge devices and mobile apps are leveraging deep learning (DL) capabilities. Deploying such models on devices -- referred to as on-device models -- rather than as remote cloud-hosted services has gained popularity because it avoids transmitting user data off the device and achieves low response latency. However, on-device models can be easily attacked: they can be obtained by unpacking the corresponding apps, which fully exposes them to attackers. Recent studies show that attackers can easily mount white-box-like attacks against an on-device model or even invert its training data. To protect on-device models from white-box attacks, we propose a novel technique called model obfuscation. Specifically, model obfuscation hides and obfuscates the key information of a model -- its structure, parameters, and attributes -- through renaming, parameter encapsulation, neural-structure obfuscation, shortcut injection, and extra-layer injection. We have developed a prototype tool, ModelObfuscator, to automatically obfuscate on-device TFLite models. Our experiments show that the proposed approach can dramatically improve model security by significantly increasing the difficulty of parsing a model's inner information, without increasing the latency of DL models. Our proposed on-device model obfuscation has the potential to become a fundamental technique for on-device model deployment. Our prototype tool is publicly available at: https://github.com/zhoumingyi/ModelObfuscator.
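Two of the listed transformations, shortcut injection and extra-layer injection, depend on inserting operations that leave a model's outputs numerically unchanged while altering its visible graph structure. The sketch below is a minimal illustration of the arithmetic identities such injected ops can exploit; it uses plain NumPy and hypothetical function names, not ModelObfuscator's actual API.

```python
import numpy as np

def extra_layer_injection(x, c=np.float32(1.5)):
    # Illustrative injected "extra layer": (x + c) - c is an identity
    # (exact for these representable values), yet it adds two visible
    # ops to the serialized model graph, obscuring the true structure.
    return (x + c) - c

def shortcut_injection(x):
    # Illustrative injected shortcut whose contribution is zero:
    # x + (x - x) == x, but the graph now shows an extra
    # residual-style connection that did not exist in the original.
    return x + (x - x)
```

Because both transformations are output-preserving, an obfuscated model keeps the original's accuracy while its unpacked representation no longer matches the original architecture.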