We investigate the application of randomized quasi-Monte Carlo (RQMC) methods in random feature approximations for kernel-based learning. Compared with the classical Monte Carlo (MC) approach \citep{rahimi2007random}, RQMC improves the deterministic approximation error bound from $O_P(1/\sqrt{M})$ to $O(1/M)$ (up to logarithmic factors), where $M$ denotes the number of random features, matching the rate achieved by quasi-Monte Carlo (QMC) methods \citep{huangquasi}. Beyond this deterministic guarantee, we establish additional average error bounds for RQMC features: some requiring weaker assumptions and others substantially reducing the exponent of the logarithmic factor. In the context of kernel ridge regression, we show that RQMC features offer computational advantages over MC features while preserving the same statistical error rate. Empirical results further show that RQMC methods maintain stable performance in both low and moderately high-dimensional settings, unlike QMC methods, whose performance degrades significantly as the dimension increases.
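To make the construction concrete, the following is a minimal sketch (not the paper's implementation) of random Fourier features for the Gaussian kernel, with frequencies drawn either i.i.d.\ Gaussian (MC) or from a scrambled Sobol' sequence pushed through the Gaussian inverse CDF (RQMC). The helper name \texttt{rff} and all parameter choices are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: MC vs. RQMC random Fourier features for the Gaussian
# kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)). Illustrative only.
import numpy as np
from scipy.stats import norm
from scipy.stats.qmc import Sobol

def rff(X, M, sigma=1.0, method="rqmc", seed=0):
    """Map X of shape (n, d) to M random Fourier features."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    if method == "mc":
        # Classical Monte Carlo: i.i.d. Gaussian frequencies.
        W = rng.standard_normal((M, d)) / sigma
    else:
        # RQMC: scrambled Sobol' points in [0,1)^d, mapped through the
        # inverse Gaussian CDF to obtain low-discrepancy frequencies.
        U = Sobol(d=d, scramble=True, seed=seed).random(M)
        W = norm.ppf(U) / sigma
    b = rng.uniform(0.0, 2.0 * np.pi, size=M)
    return np.sqrt(2.0 / M) * np.cos(X @ W.T + b)

# Usage: compare kernel approximation error of MC and RQMC features.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
sqdist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sqdist / 2.0)
for method in ("mc", "rqmc"):
    Z = rff(X, M=1024, method=method)
    print(method, "max abs error:", np.abs(Z @ Z.T - K_exact).max())
\end{verbatim}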