Estimating population quantities such as mean outcomes from user feedback is fundamental to platform evaluation and social science, yet feedback is often missing not at random (MNAR): users with stronger opinions are more likely to respond, so standard estimators are biased and the estimand is not identified without additional assumptions. Existing approaches typically rely on strong parametric assumptions or bespoke auxiliary variables that may be unavailable in practice. In this paper, we develop a partial identification framework in which sharp bounds on the estimand are obtained by solving a pair of linear programs whose constraints encode the observed data structure. This formulation naturally incorporates outcome predictions from pretrained models, including large language models (LLMs), as additional linear constraints that tighten the feasible set. We call these predictions weak shadow variables: they satisfy a conditional independence assumption with respect to missingness but need not meet the completeness conditions required by classical shadow-variable methods. When predictions are sufficiently informative, the bounds collapse to a point, recovering standard identification as a special case. To provide valid finite-sample coverage of the identified set, we propose a set-expansion estimator that achieves a slower-than-$\sqrt{n}$ convergence rate in the set-identified regime and the standard $\sqrt{n}$ rate under point identification. In simulations and semi-synthetic experiments on customer-service dialogues, we find that LLM predictions are often ill-conditioned for classical shadow-variable methods yet remain highly effective in our framework: they shrink identification intervals by 75--83\% while maintaining valid coverage under realistic MNAR mechanisms.
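The identification step described above can be sketched as a pair of small linear programs. The sketch below is purely illustrative and is not the paper's implementation: it assumes a discrete outcome $Y$, a binary prediction $Z$ (standing in for an LLM prediction) satisfying $Z \perp R \mid Y$ where $R$ is the response indicator, and made-up probabilities. The decision variable is the distribution of $Y$ among nonrespondents, constrained to the simplex and to reproducing the observed nonrespondent prediction distribution; minimizing and maximizing the implied mean gives the bounds.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discrete setup: outcome Y takes values in `support`
# (e.g. 1-5 star ratings). All numbers below are illustrative.
support = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
K = len(support)

# Quantities estimable from the observed data:
p_resp = 0.6                                          # P(R=1), response rate
p_y_obs = np.array([0.30, 0.10, 0.05, 0.15, 0.40])    # P(Y=y | R=1)

# Binary prediction Z, assumed to satisfy Z independent of R given Y.
# P(Z=1 | Y=y) is estimable from respondents; P(Z=1 | R=0) is observed
# directly because the prediction is available for everyone.
p_z_given_y = np.array([0.10, 0.25, 0.50, 0.75, 0.90])
p_z_nonresp = 0.45                                    # P(Z=1 | R=0)

# Decision variable q(y) = P(Y=y | R=0). Equality constraints: q lies
# on the simplex, and under the independence assumption it must
# reproduce the nonrespondent prediction distribution.
A_eq = np.vstack([np.ones(K), p_z_given_y])
b_eq = np.array([1.0, p_z_nonresp])
bounds = [(0.0, 1.0)] * K

lo = linprog(c=support, A_eq=A_eq, b_eq=b_eq, bounds=bounds)   # min E[Y | R=0]
hi = linprog(c=-support, A_eq=A_eq, b_eq=b_eq, bounds=bounds)  # max E[Y | R=0]

# Combine with the (identified) respondent contribution to bound E[Y].
mean_resp = p_resp * (support @ p_y_obs)
lower = mean_resp + (1 - p_resp) * lo.fun
upper = mean_resp + (1 - p_resp) * (-hi.fun)
print(f"identified interval for E[Y]: [{lower:.3f}, {upper:.3f}]")
```

Without the prediction constraint, the feasible set for $q$ is the entire simplex and the interval spans the worst cases (mass on the smallest or largest support point); the extra equality row is what shrinks it, and adding more informative constraints can collapse the interval to a point.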