With the growing demand for device-free and privacy-preserving sensing solutions, Wi-Fi sensing has emerged as a promising approach for human pose estimation (HPE). However, existing methods often process vast amounts of channel state information (CSI) data directly, ultimately straining networking resources. This paper introduces TinySense, an efficient compression framework that enhances the scalability of Wi-Fi-based human sensing. Our approach builds on a vector quantization-based generative adversarial network (VQGAN). Specifically, by leveraging a VQGAN-learned codebook, TinySense significantly reduces CSI data volume while maintaining the accuracy required for reliable HPE. To optimize compression, we employ the K-means algorithm to cluster a large-scale pre-trained codebook into smaller subsets, enabling dynamic adjustment of compression bitrates. Furthermore, a Transformer model is incorporated to mitigate bitrate loss, enhancing robustness under unreliable networking conditions. We prototype TinySense on an experimental testbed using Jetson Nano and Raspberry Pi devices to measure latency and network resource usage. Extensive results demonstrate that TinySense significantly outperforms state-of-the-art compression schemes, achieving up to 1.5x higher HPE accuracy (PCK20) at the same compression rate. It also reduces latency and networking overhead by up to 5x and 2.5x, respectively. The code repository is available online.
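The bitrate-adjustment idea above can be illustrated with a minimal sketch: clustering a large pre-trained codebook into a smaller one with K-means shrinks the per-token index cost from log2(N) to log2(k) bits. This is not the paper's implementation; the function name, codebook size, and NumPy-only K-means loop below are illustrative assumptions.

```python
import numpy as np

def cluster_codebook(codebook, k, iters=20, seed=0):
    """Cluster a pre-trained codebook (N x d) into k centroids with K-means.

    Transmitting centroid indices instead of full-codebook indices reduces
    the per-token bitrate from log2(N) to log2(k) bits.
    Hypothetical sketch, not TinySense's actual training code.
    """
    rng = np.random.default_rng(seed)
    # initialize centroids by sampling k distinct codebook entries
    centroids = codebook[rng.choice(len(codebook), size=k, replace=False)]
    for _ in range(iters):
        # assign each code vector to its nearest centroid (Euclidean)
        dists = np.linalg.norm(codebook[:, None, :] - centroids[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # recompute each centroid as the mean of its members
        for j in range(k):
            members = codebook[assign == j]
            if len(members):  # keep the old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return centroids, assign

# e.g., shrink a 1024-entry codebook to 64 entries: 10 -> 6 bits per index
codebook = np.random.default_rng(1).normal(size=(1024, 32))
small, assign = cluster_codebook(codebook, k=64)
print(small.shape, assign.shape)
```

Running the sketch with several values of k (say 32, 64, 128) gives the family of bitrates among which a sender could switch as network conditions change.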