Stochastic gradient descent (SGD) is a scalable and memory-efficient optimization algorithm for large datasets and streaming data, and it has attracted considerable attention and popularity. Applications of SGD-based estimators to statistical inference, such as interval estimation, have also been highly successful. However, most related work assumes i.i.d. observations or Markov chains; when the observations come from a mixing time series, how to conduct valid statistical inference remains unexplored. Indeed, the general correlation among observations poses a challenge for interval estimation: most existing methods ignore this correlation and can produce invalid confidence intervals. In this paper, we propose a mini-batch SGD estimator for statistical inference when the data are $\phi$-mixing. The confidence intervals are constructed using an associated mini-batch bootstrap SGD procedure. Using the ``independent block'' trick of \cite{yu1994rates}, we show that the proposed estimator is asymptotically normal and that its limiting distribution can be effectively approximated by the bootstrap procedure. The proposed method is memory-efficient and easy to implement in practice. Simulation studies on synthetic data and an application to a real-world dataset confirm our theory.
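To fix ideas before the formal development, the following minimal Python sketch illustrates the general shape of such a procedure: mini-batch SGD on a streaming least-squares problem, run alongside a parallel multiplier-bootstrap that perturbs each mini-batch gradient by mean-one random weights and reads off percentile confidence intervals from the averaged iterates. The linear model, the exponential multiplier weights, the step-size schedule, and the function name \texttt{minibatch\_sgd\_bootstrap} are all illustrative assumptions, not the paper's exact specification for $\phi$-mixing data.

\begin{verbatim}
import numpy as np

def minibatch_sgd_bootstrap(X, y, batch_size=32, n_boot=200,
                            lr0=0.5, alpha=0.55, seed=0):
    # Sketch only: least-squares loss and exponential multiplier
    # weights are assumed for illustration.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)              # main SGD iterate
    boots = np.zeros((n_boot, d))    # perturbed bootstrap iterates
    theta_bar = np.zeros(d)          # averaged (Polyak-Ruppert) estimate
    boots_bar = np.zeros((n_boot, d))
    n_steps = n // batch_size
    for t in range(n_steps):
        lr = lr0 / (t + 1) ** alpha  # polynomially decaying step size
        sl = slice(t * batch_size, (t + 1) * batch_size)
        Xb, yb = X[sl], y[sl]        # consecutive mini-batch, stream order
        grad = Xb.T @ (Xb @ theta - yb) / batch_size
        theta -= lr * grad
        # Each bootstrap thread reuses the same batch, with its
        # gradient scaled by a random mean-one multiplier weight.
        W = rng.exponential(1.0, size=n_boot)
        res = Xb @ boots.T - yb[:, None]          # (batch, n_boot)
        bgrad = (Xb.T @ res / batch_size).T       # (n_boot, d)
        boots -= lr * W[:, None] * bgrad
        theta_bar += (theta - theta_bar) / (t + 1)   # running averages
        boots_bar += (boots - boots_bar) / (t + 1)
    # Percentile-type 95% confidence intervals from the bootstrap spread.
    lo = theta_bar + np.quantile(boots_bar - theta_bar, 0.025, axis=0)
    hi = theta_bar + np.quantile(boots_bar - theta_bar, 0.975, axis=0)
    return theta_bar, lo, hi
\end{verbatim}

The sketch is memory-efficient in the sense emphasized above: each observation is touched once in stream order, and only the current iterates and their running averages are stored, never the full dataset.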