While synthetic data hold great promise for privacy protection, their statistical analysis poses significant challenges that necessitate innovative solutions. The use of deep generative models (DGMs) for synthetic data generation is known to induce considerable bias and imprecision into synthetic data analyses, compromising their inferential utility relative to original data analyses. This bias and uncertainty can be substantial enough to slow statistical convergence rates, even in seemingly straightforward analyses such as mean estimation. The standard errors of such estimators then shrink with sample size more slowly than the typical $1/\sqrt{n}$ rate. This complicates fundamental calculations such as p-values and confidence intervals, with no straightforward remedy currently available. In response to these challenges, we propose a new strategy that tailors synthetic data created by DGMs to specific data analyses. Drawing on insights from debiased and targeted machine learning, our approach corrects for biases, restores convergence rates, and yields estimators whose large-sample variances are easy to approximate. We illustrate our proposal through a simulation study on toy data and two case studies on real-world data, highlighting the importance of tailoring DGMs for targeted data analysis. This debiasing strategy contributes to advancing the reliability and applicability of synthetic data in statistical inference.
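To make the slow-convergence claim concrete, the following toy sketch (our own illustration, not the paper's method) stands in for a DGM with a simple fitted Gaussian generator. For a fixed original sample of size `n_orig`, the standard error of the synthetic-data mean plateaus near $1/\sqrt{n_\text{orig}}$ as the synthetic sample size $m$ grows, rather than shrinking at the naive $1/\sqrt{m}$ rate — the generator's estimation error becomes the binding constraint. All names here (`synthetic_mean`, `n_orig`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_orig = 500   # size of the (fixed) original sample used to fit the generator
reps = 500     # Monte Carlo replications to estimate the standard error

def synthetic_mean(m, rng):
    """Fit a Gaussian 'generator' to fresh original data, draw m synthetic
    points from it, and return their mean (toy stand-in for a DGM)."""
    x = rng.normal(0.0, 1.0, n_orig)       # original data ~ N(0, 1)
    mu_hat, sd_hat = x.mean(), x.std()     # fitted generator parameters
    return rng.normal(mu_hat, sd_hat, m).mean()

se = {}
for m in (100, 10_000, 100_000):
    draws = np.array([synthetic_mean(m, rng) for _ in range(reps)])
    se[m] = draws.std()
    print(f"m={m:>7,}  SE of synthetic mean = {se[m]:.4f}  "
          f"(naive 1/sqrt(m) = {1/np.sqrt(m):.4f})")
```

In this sketch the synthetic-mean variance is roughly $1/n_\text{orig} + 1/m$, so past $m \approx n_\text{orig}$ the printed standard errors stop improving while the naive $1/\sqrt{m}$ benchmark keeps falling, which is the gap the paper's debiasing strategy is designed to close.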