In the rapidly evolving domain of satellite communications, integrating advanced machine learning techniques, particularly split learning (SL), is crucial for improving data processing and model training efficiency across satellites, space stations, and ground stations. Traditional ML approaches often face significant challenges in satellite networks due to constraints such as limited bandwidth and computational resources. To address this gap, we propose a novel framework for more efficient SL in satellite communications. Our approach, Dynamic Topology Informed Pruning (DTIP), combines differential privacy with graph and model pruning to optimize graph neural networks (GNNs) for distributed learning. DTIP strategically applies differential privacy to raw graph data and prunes GNNs, reducing both model size and communication load across network tiers. Extensive experiments on diverse datasets demonstrate DTIP's efficacy in improving privacy, accuracy, and computational efficiency. Specifically, on the Amazon2M dataset, DTIP maintains an accuracy of 0.82 while achieving a 50% reduction in floating-point operations (FLOPs). Similarly, on the ArXiv dataset, DTIP achieves an accuracy of 0.85 under comparable conditions. Our framework not only significantly improves the operational efficiency of satellite communications but also establishes a new benchmark in privacy-aware distributed learning, with the potential to transform data handling in space-based networks.
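For concreteness, the minimal sketch below illustrates the two operations named in the abstract: perturbing raw node features with Gaussian noise before they leave a client tier, and magnitude-pruning the weights of a single GCN-style layer before transmission. It is an assumption-laden illustration, not the paper's released implementation; the noise scale sigma, the clipping bound, the pruning ratio, and the toy graph are all illustrative choices, and the actual DTIP pipeline splits these steps across satellite, space-station, and ground tiers.

```python
import numpy as np

def dp_perturb_features(x, sigma=0.5, clip=1.0, rng=None):
    """Clip each node's feature row to L2 norm <= clip, then add Gaussian noise.

    Clipping bounds the per-node sensitivity so the Gaussian noise provides a
    differential-privacy-style guarantee; sigma here is an illustrative scale,
    not a calibrated (epsilon, delta) value.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    norms = np.maximum(np.linalg.norm(x, axis=1, keepdims=True), 1e-12)
    x_clipped = x * np.minimum(1.0, clip / norms)
    return x_clipped + rng.normal(0.0, sigma, size=x.shape)

def magnitude_prune(w, ratio=0.5):
    """Zero out the smallest-magnitude fraction `ratio` of weights (unstructured pruning)."""
    k = int(ratio * w.size)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def gcn_layer(adj, x, w):
    """One GCN-style propagation: D^{-1/2} (A + I) D^{-1/2} X W followed by ReLU."""
    adj_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(adj_hat.sum(axis=1))
    norm_adj = adj_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm_adj @ x @ w, 0.0)

# Toy 4-node chain graph with 3-dimensional features and a 3x2 weight matrix (all hypothetical).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = np.random.default_rng(1).normal(size=(4, 3))
w = np.random.default_rng(2).normal(size=(3, 2))

x_private = dp_perturb_features(x)        # privatise raw graph features before sharing
w_pruned = magnitude_prune(w, ratio=0.5)  # shrink the layer (fewer nonzero weights, fewer FLOPs)
print(gcn_layer(adj, x_private, w_pruned))
```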