We propose a new weighted-average estimator for high-dimensional parameters in a distributed learning system, in which the weight assigned to each coordinate is proportional to the inverse of the variance of the local estimate for that coordinate. This weighting allows the new estimator to achieve minimal mean squared error, comparable to state-of-the-art one-shot distributed learning methods, while keeping communication costs remarkably low: each agent transmits only two vectors to the central server. As a result, the proposed method attains optimal statistical efficiency with substantially reduced communication overhead. We further demonstrate its effectiveness by establishing the error bound and asymptotic properties of the estimator, and by evaluating its numerical performance on simulated examples and a real data analysis.
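The coordinate-wise inverse-variance aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the agent summaries (`betas`, `variances`) are synthetic placeholders, and we assume each agent sends exactly two vectors, its local estimate and a per-coordinate variance estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
K, p = 10, 5  # number of agents, parameter dimension (illustrative values)

# Hypothetical local summaries: agent k transmits two vectors to the
# central server, a local estimate beta_k and a variance estimate v_k.
betas = rng.normal(loc=1.0, scale=0.1, size=(K, p))
variances = rng.uniform(0.5, 2.0, size=(K, p))

# Coordinate-wise inverse-variance weighting: the weight of agent k on
# coordinate j is (1 / v_kj), normalized over agents for that coordinate.
inv_var = 1.0 / variances
weights = inv_var / inv_var.sum(axis=0)

# Combined estimator: per-coordinate weighted average of local estimates.
beta_hat = (weights * betas).sum(axis=0)  # shape (p,)
```

Coordinates with smaller local variance receive larger weight, which is what yields the minimal mean squared error among weighted averages of the local estimates.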