We propose SLIM (Stochastic Learning and Inference in overidentified Models), a scalable stochastic approximation framework for nonlinear GMM. SLIM forms iterative updates from independent mini-batches of moments and their derivatives, producing unbiased directions that ensure almost-sure convergence. It requires neither a consistent initial estimator nor global convexity and accommodates both fixed-sample and random-sampling asymptotics. We further develop an optional second-order refinement and inference procedures based on random scaling and plug-in variance estimation, together with plug-in, debiased plug-in, and online versions of the Sargan--Hansen $J$-test tailored to stochastic learning. In Monte Carlo experiments based on a nonlinear EASI demand system with 576 moment conditions, 380 parameters, and $n = 10^5$, SLIM solves the model in under 1.4 hours, whereas full-sample GMM in Stata on a powerful laptop converges only after 18 hours. The debiased plug-in $J$-test delivers satisfactory finite-sample inference, and SLIM scales smoothly to $n = 10^6$.
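As a concrete illustration of the mini-batch updates described above, the sketch below applies a SLIM-style stochastic direction $-a_k \hat G_k^\top W \hat g_k$, with the Jacobian $\hat G_k$ and moment vector $\hat g_k$ estimated from two independent mini-batches, to a toy overidentified linear IV model (three instruments, one parameter). The variable names, identity weighting matrix, batch size, and Robbins--Monro step-size schedule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_z = 100_000, 3
theta_true = 2.0
z = rng.normal(size=(n, p_z))                             # instruments
x = z @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=n)    # regressor correlated with z
y = theta_true * x + rng.normal(size=n)

W = np.eye(p_z)       # first-step identity weighting (illustrative choice)
theta = 0.0           # arbitrary start: no consistent initial estimator required
batch = 256
for k in range(2000):
    # two INDEPENDENT mini-batches, so the product G^T W g is an unbiased direction
    i1 = rng.integers(n, size=batch)
    i2 = rng.integers(n, size=batch)
    g = (z[i1] * (y[i1] - x[i1] * theta)[:, None]).mean(axis=0)  # moment vector g(theta)
    G = -(z[i2] * x[i2, None]).mean(axis=0)                      # Jacobian dg/dtheta
    a_k = 5.0 / (k + 50)                                         # decaying step size
    theta = theta - a_k * (G @ W @ g)                            # stochastic GMM update

print(theta)  # drifts toward theta_true = 2.0
```

The two-batch construction matters: using one batch for both $\hat g_k$ and $\hat G_k$ would make their product a biased estimate of the population gradient $G^\top W g$, whereas independent batches keep the update direction unbiased, which is what underlies the almost-sure convergence claim in the abstract.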