We study computational aspects of algorithmic replicability, a notion of stability introduced by Impagliazzo, Lei, Pitassi, and Sorrell [2022]. Motivated by a recent line of work that established strong statistical connections between replicability and other notions of learnability, such as online learning, private learning, and SQ learning, we aim to better understand the computational connections between replicability and these learning paradigms. Our first result shows that there is a concept class that is efficiently replicably PAC learnable but, under standard cryptographic assumptions, admits no efficient online learner. Subsequently, we design an efficient replicable learner for PAC learning parities when the marginal distribution is far from uniform, making progress on a question posed by Impagliazzo et al. [2022]. To obtain this result, we design a replicable lifting framework, inspired by Blanc, Lange, Malik, and Tan [2023], that transforms, in a black-box manner, efficient replicable PAC learners under the uniform marginal distribution over the Boolean hypercube into replicable PAC learners under any marginal distribution, with sample and time complexity that depends on a certain measure of the complexity of the distribution. Finally, we show that any pure DP learner can be transformed into a replicable one in time that is polynomial in the accuracy and confidence parameters and exponential in the representation dimension of the underlying hypothesis class.