Estimating the Riesz representer is central to debiased machine learning for causal and structural parameter estimation. We propose generalized Riesz regression, a unified framework that estimates the Riesz representer by fitting a representer model via Bregman divergence minimization. This framework includes the squared loss and the Kullback--Leibler (KL) divergence as special cases: the former recovers Riesz regression, while the latter recovers tailored loss minimization. Under suitable model specifications, the dual problems correspond to covariate balancing, which we call automatic covariate balancing. Moreover, under the same specifications, outcome averages weighted by the estimated Riesz representer satisfy Neyman orthogonality even without estimating the regression function, a property we call automatic Neyman orthogonalization. This property not only reduces the estimation error of Neyman orthogonal scores but also clarifies a key distinction between debiased machine learning and targeted maximum likelihood estimation. Our framework can also be viewed as a generalization of density ratio fitting under Bregman divergences to Riesz representer estimation, and it applies beyond density ratio estimation. We provide convergence analyses for both reproducing kernel Hilbert space (RKHS) and neural network model classes. A Python package for generalized Riesz regression is available at https://github.com/MasaKat0/grr.
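To make the squared-loss special case concrete: for the average treatment effect, the Riesz representer of the moment functional m(W, g) = g(1, X) − g(0, X) is α₀(D, X) = D/e(X) − (1 − D)/(1 − e(X)). Riesz regression estimates it by minimizing the empirical squared loss E[α(W)² − 2 m(W, α)] over a model class, which for a linear model has a closed form. The sketch below is a minimal illustration under simplifying assumptions (a constant propensity score of 0.5 and a two-dimensional indicator basis); it is not the package's API.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
D = rng.integers(0, 2, size=n)        # binary treatment with P(D=1) = 0.5

# Indicator basis phi(d, x) = [d, 1 - d], so alpha(d, x) = th0*d + th1*(1 - d).
Phi = np.column_stack([D, 1 - D]).astype(float)

# Moment features m(W, phi) = phi(1, X) - phi(0, X) = [1, -1] for every unit.
M = np.tile([1.0, -1.0], (n, 1))

# Squared-loss Riesz regression: minimize mean(alpha(W)^2) - 2*mean(m(W, alpha)).
# First-order condition gives the closed form th = (Phi'Phi/n)^{-1} mean(M).
theta = np.linalg.solve(Phi.T @ Phi / n, M.mean(axis=0))

# With e(X) = 0.5, the true representer D/e - (1-D)/(1-e) corresponds to
# coefficients (2, -2); theta recovers (1/p_hat, -1/(1-p_hat)) with p_hat = mean(D).
print(theta)
```

Weighting observed outcomes by the fitted α then debiases a plug-in estimate of the target parameter, which is the role the representer plays throughout the paper.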