Large language models (LLMs) have demonstrated strong performance across diverse tasks, but fine-tuning them typically relies on centralized, cloud-based infrastructure. This requires data owners to upload potentially sensitive data to external servers, raising serious privacy concerns. An alternative is to fine-tune LLMs directly on edge devices using local data; however, this introduces a new challenge: the model owner must transfer proprietary models to the edge, risking intellectual property (IP) leakage. To address this dilemma, we propose DistilLock, a trusted execution environment (TEE)-assisted fine-tuning framework that enables privacy-preserving knowledge distillation on the edge. In DistilLock, the proprietary foundation model is executed within a TEE enclave on the data owner's device, acting as a secure black-box teacher. This design preserves both data privacy and model IP by preventing direct access to model internals. Furthermore, DistilLock employs a model obfuscation mechanism that offloads obfuscated weights to untrusted accelerators for efficient knowledge distillation without compromising security. We demonstrate that DistilLock prevents unauthorized knowledge distillation and model-stealing attacks while maintaining high computational efficiency, offering a secure and practical solution for edge-based LLM personalization.
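To make the offloading idea concrete, below is a minimal sketch of one possible weight-obfuscation scheme: the TEE applies a secret permutation to a linear layer's output rows before releasing the weights, lets the untrusted accelerator compute with the obfuscated matrix, and inverts the permutation on the result inside the enclave. This is an illustrative assumption, not DistilLock's actual mechanism; all names and the permutation-based construction here are hypothetical.

```python
# Sketch (assumed, not the paper's scheme): permutation-based obfuscation of a
# linear layer y = W @ x so an untrusted accelerator never sees the true W.
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 64, 32
W = rng.standard_normal((d_out, d_in))  # proprietary weights, TEE-resident
x = rng.standard_normal(d_in)           # input activation

# Inside the TEE: sample a secret permutation and shuffle the output rows.
perm = rng.permutation(d_out)
W_obf = W[perm]                          # only W_obf leaves the enclave

# On the untrusted accelerator: compute with the obfuscated weights.
y_obf = W_obf @ x                        # equals (W @ x) permuted by `perm`

# Back inside the TEE: invert the secret permutation to recover the output.
inv_perm = np.argsort(perm)
y = y_obf[inv_perm]

assert np.allclose(y, W @ x)             # matches the unobfuscated computation
```

A row permutation alone is a weak obfuscation on its own; a practical scheme would combine it with further transformations and keep the secrets (here, `perm`) exclusively inside the enclave, which is the property the sketch is meant to illustrate.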