A well-known problem when learning from user clicks is the inherent biases prevalent in the data, such as position bias or trust bias. Click models are a common method for extracting information from user clicks, such as document relevance in web search, or for estimating click biases for downstream applications such as counterfactual learning-to-rank, ad placement, or fair ranking. Recent work shows that current evaluation practices in the community fail to guarantee that a well-performing click model generalizes well to downstream tasks in which the ranking distribution differs from the training distribution, i.e., under covariate shift. In this work, we propose an evaluation metric based on conditional independence testing to detect a lack of robustness to covariate shift in click models. We introduce the concept of debiasedness in click modeling and derive a metric for measuring it. In extensive semi-synthetic experiments, we show that our proposed metric helps to predict the downstream performance of click models under covariate shift and is useful in an off-policy model selection setting.