In the rapidly evolving internet-of-things (IoT) ecosystem, effective data analysis techniques are crucial for handling distributed data generated by sensors. To address the limitations of existing methods, such as the sub-gradient approach, which fails to distinguish effectively between active and non-active coefficients, this paper introduces the decentralized smoothing alternating direction method of multipliers (DSAD) for penalized quantile regression. Our method leverages non-convex sparse penalties, such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty, to improve the identification and retention of significant predictors. DSAD incorporates a total variation norm within a smoothing ADMM framework, achieving consensus among distributed nodes and ensuring uniform model performance across disparate data sources. This approach overcomes the convergence challenges traditionally associated with non-convex penalties in decentralized settings. We present theoretical proofs and extensive simulation results that validate the effectiveness of DSAD, demonstrating more reliable convergence and improved estimation accuracy compared with prior methods.
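As a small illustration of the ingredients named above, the sketch below evaluates the quantile check loss and the two non-convex penalties (SCAD and MCP). These are the standard textbook forms, not code from the paper; the parameter defaults (a = 3.7 for SCAD, γ = 3 for MCP) are conventional choices and only assumptions here.

```python
def check_loss(u, tau):
    """Quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def scad(t, lam=1.0, a=3.7):
    """SCAD penalty (Fan & Li, 2001); a = 3.7 is the conventional default."""
    at = abs(t)
    if at <= lam:
        return lam * at
    elif at <= a * lam:
        return (2 * a * lam * at - at**2 - lam**2) / (2 * (a - 1))
    else:
        return (a + 1) * lam**2 / 2

def mcp(t, lam=1.0, gamma=3.0):
    """Minimax concave penalty (Zhang, 2010)."""
    at = abs(t)
    if at <= gamma * lam:
        return lam * at - at**2 / (2 * gamma)
    else:
        return gamma * lam**2 / 2
```

Both penalties agree with the L1 norm near zero but flatten to a constant for large coefficients, which is what lets them retain large (active) coefficients without the shrinkage bias of the L1 penalty — the property the abstract contrasts with the sub-gradient approach.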