We generalize the leverage score sampling sketch for $\ell_2$-subspace embeddings to accommodate sampling subsets of the transformed data, making the sketching approach appropriate for distributed settings. We then use this to derive an approximate coded computing approach for first-order methods, known as gradient coding, which accelerates linear regression in the presence of failures in distributed computational networks, \textit{i.e.} stragglers. We replicate the data across the distributed network to attain the approximation guarantees through the induced sampling distribution. The significance and main contribution of this work is that it unifies randomized numerical linear algebra with approximate coded computing, while attaining an induced $\ell_2$-subspace embedding through uniform sampling. The transition to uniform sampling is done without applying a random projection, as in the case of the subsampled randomized Hadamard transform. Furthermore, by incorporating this technique into coded computing, our scheme constitutes an iterative sketching approach to approximately solving linear regression. We also propose weighting when sketching takes place through sampling with replacement, for further compression.
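For context, the two notions referenced above can be stated as follows; this is a standard formulation with assumed notation ($A$, $U$, $S$, $\epsilon$ are not defined in the abstract itself). For a full-rank data matrix $A \in \mathbb{R}^{n \times d}$ with orthonormal column basis $U$ (e.g.\ from the thin SVD $A = U \Sigma V^\top$), the leverage scores and the $\ell_2$-subspace embedding property of a sketching matrix $S \in \mathbb{R}^{m \times n}$ are typically given by

```latex
% Leverage score of the i-th row a_i^\top of A; the scores sum to the rank d:
\ell_i \;=\; \big\| U_{(i)} \big\|_2^2 \;=\; a_i^\top \big( A^\top A \big)^{\dagger} a_i,
\qquad \sum_{i=1}^{n} \ell_i \;=\; d .

% S is an \epsilon-subspace embedding of A if, for all x \in \mathbb{R}^d:
(1-\epsilon)\, \| A x \|_2^2 \;\le\; \| S A x \|_2^2 \;\le\; (1+\epsilon)\, \| A x \|_2^2 .
```

Sampling rows of $A$ with probabilities proportional to $\ell_i$ (suitably rescaled) yields such an embedding with high probability; the abstract's contribution is to attain an induced embedding of this kind through \emph{uniform} sampling, without a randomized projection.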