We study differentially private (DP) stochastic optimization (SO) with loss functions whose worst-case Lipschitz parameter over all data may be extremely large or infinite. To date, the vast majority of work on DP SO assumes that the loss is uniformly Lipschitz continuous over data (i.e., stochastic gradients are uniformly bounded across all data points). While this assumption is convenient, it often leads to pessimistic risk bounds: in many practical problems, the worst-case (uniform) Lipschitz parameter of the loss may be huge due to outliers and/or heavy-tailed data, and risk bounds for DP SO that scale with this parameter are vacuous. To address these limitations, we provide improved risk bounds that do not depend on the uniform Lipschitz parameter. Following a recent line of work [WXDX20, KLZ22], we instead assume that stochastic gradients have bounded $k$-th order moments for some $k \geq 2$. Compared with works on uniformly Lipschitz DP SO, our risk bounds scale with the $k$-th moment bound rather than the uniform Lipschitz parameter of the loss, allowing for significantly faster rates in the presence of outliers and/or heavy-tailed data. First, for smooth convex loss functions, we provide linear-time algorithms with state-of-the-art excess risk; we complement these upper bounds with novel lower bounds and show that, in certain parameter regimes, our linear-time excess risk bounds are minimax optimal. Second, we provide the first algorithm to handle non-smooth convex loss functions. To do so, we develop novel algorithmic and stability-based proof techniques, which we believe will be useful for future work toward optimal excess risk. Finally, our work is the first to address non-convex, non-uniformly Lipschitz loss functions satisfying the Proximal-PL inequality, which covers some practical machine learning models; our Proximal-PL algorithm attains near-optimal excess risk.
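For concreteness, here is a minimal sketch of the setting in standard notation; the symbols $F$, $f$, $\mathcal{W}$, $P$, and $r$ are chosen for illustration and need not match the notation in the paper's body. The population risk and the excess risk of an estimator $\widehat{w}$ are
\[
F(w) \;=\; \mathbb{E}_{x \sim P}\!\left[ f(w, x) \right],
\qquad
\mathbb{E}\!\left[ F(\widehat{w}) \right] - \min_{w \in \mathcal{W}} F(w),
\]
and the bounded $k$-th moment assumption replacing uniform Lipschitzness reads
\[
\sup_{w \in \mathcal{W}} \; \mathbb{E}_{x \sim P}\!\left[ \left\| \nabla f(w, x) \right\|_2^{k} \right]^{1/k} \;\le\; r
\quad \text{for some } k \geq 2,
\]
so that risk bounds scale with $r$ rather than with the worst-case Lipschitz parameter $\sup_{w \in \mathcal{W},\, x} \left\| \nabla f(w, x) \right\|_2$, which may be huge or infinite.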
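As background for the final claim, we recall the Proximal-PL inequality in the standard form of Karimi, Nutini, and Schmidt (2016); the exact variant used in this work may differ. For a composite objective $F(w) = f_0(w) + g(w)$ with $\beta$-smooth $f_0$ and some $\mu > 0$, the condition requires, for all $w$,
\[
\frac{1}{2}\, \mathcal{D}_g(w, \beta) \;\geq\; \mu \left( F(w) - \min_{w'} F(w') \right),
\quad \text{where} \quad
\mathcal{D}_g(w, \beta) \;=\; -2\beta \min_{y} \left[ \langle \nabla f_0(w), y - w \rangle + \frac{\beta}{2} \left\| y - w \right\|^2 + g(y) - g(w) \right].
\]
This generalizes the Polyak-Łojasiewicz condition to non-smooth composite problems and is satisfied by some practical non-convex machine learning models.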