Wearable data serves various health monitoring purposes, such as determining activity states from user behavior and providing tailored exercise recommendations. However, individual wearable devices have limited data perception and computational capabilities, often necessitating joint model training across multiple devices. Federated Human Activity Recognition (HAR) offers a viable approach, enabling global model training without uploading users' local activity data. Nonetheless, recent studies have revealed that significant privacy concerns persist within federated learning frameworks. Motivated by this, we investigate privacy leakage in federated user behavior recognition modeling across multiple wearable devices. Our proposed system comprises a federated learning architecture with $N$ wearable device users and a parameter server that may attempt to extract sensitive user information from model parameters. Accordingly, we consider a membership inference attack mounted by a malicious server, which exploits differences in the model's generalization across client data. Experiments on five publicly available HAR datasets show that the malicious-server membership inference achieves 92\% accuracy. Our study provides preliminary evidence of substantial privacy risks in federated training across multiple wearable devices, offering a novel research perspective for this domain.
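The attack sketched in the abstract exploits the generalization gap: a model typically incurs lower loss on data it was trained on (members) than on unseen data (non-members). As a minimal illustration of this idea only, and not the paper's exact method, a loss-threshold membership inference can be sketched as follows; the synthetic losses and the threshold value are hypothetical stand-ins for per-sample losses a malicious server might compute from client model parameters:

```python
import numpy as np

def infer_membership(losses, threshold):
    """Label a sample a member (1) if its loss falls below the
    threshold, otherwise a non-member (0)."""
    return (np.asarray(losses) < threshold).astype(int)

# Hypothetical per-sample losses: members (seen during training)
# tend to have lower loss than non-members because of the
# generalization gap the attack relies on.
rng = np.random.default_rng(0)
member_losses = rng.normal(loc=0.2, scale=0.1, size=100)
nonmember_losses = rng.normal(loc=1.0, scale=0.3, size=100)

preds_members = infer_membership(member_losses, threshold=0.5)
preds_nonmembers = infer_membership(nonmember_losses, threshold=0.5)

# Attack accuracy: members predicted 1, non-members predicted 0.
accuracy = (preds_members.sum() + (1 - preds_nonmembers).sum()) / 200
print(f"attack accuracy on synthetic losses: {accuracy:.2f}")
```

In practice the threshold would be calibrated (for example, on shadow models or held-out data), and the losses would come from evaluating the shared model on candidate samples; this sketch only shows why a generalization gap makes membership inferable.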