In applied statistics and machine learning, the "gold standards" used for training are often biased and almost always noisy. Dawid and Skene's justifiably popular crowdsourcing model adjusts for rater (coder, annotator) sensitivity and specificity, but fails to capture distributional properties of rating data gathered for training, which in turn biases training. In this study, we introduce a general-purpose measurement-error model with which we can infer consensus categories by adding item-level effects for difficulty, discriminativeness, and guessability. We further show how to constrain the bimodal posterior of these models to avoid (or, if necessary, allow) adversarial raters. We validate our model's goodness of fit with posterior predictive checks, the Bayesian analogue of $\chi^2$ tests. Dawid and Skene's model is rejected by goodness-of-fit tests, whereas our new model, which adjusts for item heterogeneity, is not rejected. We illustrate our new model with two well-studied binary rating data sets: caries in dental X-rays and implication in natural language.
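One way to read the item-level effects named above is as a three-parameter logistic (3PL) item-response curve, in which an item's guessability sets a floor on the probability of a correct label and its discriminativeness scales how sharply that probability rises with rater ability relative to item difficulty. A minimal sketch of that response function (the parameterization and names here are illustrative assumptions, not the paper's exact model):

```python
import math

def rating_prob(ability, difficulty, discrimination, guess):
    """3PL-style probability that a rater labels an item correctly:
    a guessing floor plus a logistic term in the rater's ability
    minus the item's difficulty, scaled by the item's discrimination."""
    logit = discrimination * (ability - difficulty)
    return guess + (1.0 - guess) / (1.0 + math.exp(-logit))

# A strong rater on an easy, discriminative item: near-certain correct label.
print(round(rating_prob(ability=2.0, difficulty=-1.0, discrimination=1.5, guess=0.2), 3))
# An average rater on a very hard item: probability falls toward the guessing floor.
print(round(rating_prob(ability=0.0, difficulty=3.0, discrimination=1.5, guess=0.2), 3))
```

Item heterogeneity enters because each item carries its own difficulty, discrimination, and guessing parameters, which the Dawid-Skene model (rater sensitivity and specificity only) cannot express.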