Recent developments in machine learning have highlighted a conflict between online platforms and their users over privacy. As regulators and operators attempt to police online platforms, the importance of user privacy and the struggle for control over user data have intensified. As users have become increasingly aware of privacy issues, client-side data storage, management, and analysis have become a favoured alternative to large-scale centralised machine learning. However, state-of-the-art machine learning methods require vast amounts of labelled user data, making them unsuitable for models that reside client-side and have access only to a single user's data. State-of-the-art methods are also computationally expensive, which degrades the user experience on compute-limited hardware and reduces battery life. A recent alternative approach has proven remarkably successful in classification tasks across a wide variety of data: using a compression-based distance measure, the normalised compression distance, to measure the distance between generic objects in classical distance-based machine learning methods. In this work, we demonstrate that the normalised compression distance is in fact not a metric; we extend it to the wider context of kernel methods to allow modelling of complex data; and we present techniques to improve the training time of models that use this distance measure. We show that the normalised compression distance performs as well as, and sometimes better than, other metrics and kernels, despite its lack of formal metric properties and at only marginally higher computational cost. The end result is a simple model with remarkable accuracy even when trained on a very small number of samples, allowing for models that are small and effective enough to run entirely on a client device using only user-supplied data.
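For readers unfamiliar with the normalised compression distance, the following is a minimal sketch of its standard form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) denotes the length of a compressed string. The choice of zlib as the compressor and the helper names below are illustrative assumptions, not the specific implementation evaluated in this work.

```python
import zlib


def compressed_size(data: bytes) -> int:
    """Length in bytes of the zlib-compressed representation of `data`."""
    return len(zlib.compress(data))


def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance between two byte strings.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length under a fixed compressor.
    """
    cx = compressed_size(x)
    cy = compressed_size(y)
    cxy = compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


# Similar strings should yield a smaller distance than dissimilar ones.
a = b"the quick brown fox jumps over the lazy dog"
b = b"the quick brown fox jumped over a lazy dog"
c = b"completely unrelated sequence of characters 12345"
print(ncd(a, b), ncd(a, c))
```

Because the compressor only approximates Kolmogorov complexity, quantities such as C(xx) > C(x) can occur in practice, which is one reason the measure fails the formal metric axioms discussed in this work.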