Natural Language Processing (NLP) tasks, such as semantic sentiment analysis and text synthesis, can compromise users' privacy and demand significant on-device computational resources. Centralized learning (CL) at the edge offers an energy-efficient alternative, yet requires collecting raw data, which compromises user privacy. While federated learning (FL) preserves privacy, it demands substantial computational energy on resource-constrained user devices. We introduce split learning (SL) as an energy-efficient, privacy-preserving tiny machine learning (TinyML) alternative and compare it to FL and CL in the presence of Rayleigh fading and additive noise. Our results show that SL reduces processing power and CO2 emissions while maintaining high accuracy, whereas FL offers a balanced compromise between efficiency and privacy. This study thus provides insights into deploying energy-efficient, privacy-preserving NLP models on edge devices.
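The split-learning setup sketched in the abstract can be illustrated with a minimal toy example: the client runs only the layers before the cut, transmits the intermediate ("smashed") activations over a Rayleigh-fading channel with additive white Gaussian noise, and the server completes the forward pass. All layer sizes, the two-layer dense model, and the zero-forcing equalization are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny model split at a "cut layer" (sizes are illustrative).
W_client = rng.standard_normal((16, 8)) * 0.1   # client-side layer (on device)
W_server = rng.standard_normal((8, 2)) * 0.1    # server-side layer (at the edge)

def client_forward(x):
    """Client computes only the layers before the cut -- low on-device cost,
    and raw input data never leaves the device."""
    return np.tanh(x @ W_client)

def rayleigh_channel(a, snr_db=20.0):
    """Model the uplink as Rayleigh flat fading plus additive Gaussian noise."""
    h = rng.rayleigh(scale=1.0 / np.sqrt(2.0), size=a.shape)  # fading gains
    signal_power = np.mean((h * a) ** 2)
    noise_std = np.sqrt(signal_power / (10.0 ** (snr_db / 10.0)) + 1e-12)
    noise = rng.standard_normal(a.shape) * noise_std
    return h * a + noise, h

def server_forward(y, h):
    """Server equalizes the faded activations (zero-forcing, assumed channel
    knowledge) and finishes the forward pass."""
    a_hat = y / np.maximum(h, 1e-6)
    return a_hat @ W_server

x = rng.standard_normal((4, 16))        # a batch of 4 inputs
smashed = client_forward(x)             # "smashed data", not raw text
received, h = rayleigh_channel(smashed)
logits = server_forward(received, h)
print(logits.shape)                     # (4, 2)
```

The energy argument follows from the split itself: the device only ever evaluates `W_client`, while the heavier server-side computation and the backward pass beyond the cut run at the edge.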