Model personalization allows a set of individuals, each facing a different learning task, to train models that are more accurate for each person than those they could develop individually. The goals of personalization are captured in a variety of formal frameworks, such as multitask learning and metalearning. Combining data for model personalization poses risks for privacy because the output of an individual's model can depend on the data of other individuals. In this work we undertake a systematic study of differentially private personalized learning. Our first main contribution is to construct a taxonomy of formal frameworks for private personalized learning. This taxonomy captures different formal frameworks for learning as well as different threat models for the attacker. Our second main contribution is to prove separations between the personalized learning problems corresponding to different choices. In particular, we prove a novel separation between private multitask learning and private metalearning.