Comparative Judgement (CJ) offers an alternative assessment approach: work is evaluated holistically rather than broken into discrete criteria. The method leverages assessors' ability to make nuanced comparisons, yielding assessments that are more reliable and valid than criterion-by-criterion marking. CJ also mirrors real-world evaluation, where overall quality emerges from the interplay of many elements. However, rubrics remain widely used in education because they provide structured criteria for grading and detailed feedback. This leaves a gap between CJ's holistic ranking and the need for criterion-based performance breakdowns. This paper addresses that gap with a Bayesian approach. We build on Bayesian CJ (BCJ) by Gray et al., which models pairwise preferences directly rather than placing likelihoods over total scores, so expected ranks can be computed together with uncertainty estimates. Their entropy-based active learning method selects the most informative pairwise comparison to present to assessors next. We extend BCJ to handle multiple independent learning outcome (LO) components, as defined by a rubric, enabling both holistic and per-component predictive rankings with uncertainty estimates. We further propose a method for aggregating the per-component entropies to identify the single most informative comparison for assessors. Experiments on synthetic and real data demonstrate the method's effectiveness. Finally, we address a key limitation of BCJ, its inability to quantify assessor agreement, by showing how agreement levels can be derived, enhancing transparency in assessment.
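To make the moving parts concrete, the following is a minimal, illustrative Python sketch of the ideas the abstract names: a per-pair posterior over preference outcomes, expected ranks with uncertainty, entropy-based selection of the next comparison, and mean-entropy aggregation across LO components. The class name `BayesianCJ`, the Beta-per-pair parameterisation, the independence approximation in `expected_ranks`, and the mean as the aggregation rule are all assumptions made for illustration, not the authors' implementation.

```python
import itertools
import math


class BayesianCJ:
    """Illustrative sketch of Bayesian Comparative Judgement (assumed model).

    Each unordered pair (i, j) with i < j holds a Beta(alpha, beta)
    posterior over P(i beats j), updated after each assessor decision.
    """

    def __init__(self, n_items, prior=(1.0, 1.0)):
        self.n = n_items
        # One Beta posterior per unordered pair, starting from a uniform prior.
        self.post = {p: list(prior)
                     for p in itertools.combinations(range(n_items), 2)}

    def update(self, winner, loser):
        """Record one pairwise judgement by the assessor."""
        i, j = min(winner, loser), max(winner, loser)
        if winner == i:
            self.post[(i, j)][0] += 1  # evidence that i beats j
        else:
            self.post[(i, j)][1] += 1  # evidence that j beats i

    def p_win(self, i, j):
        """Posterior mean of P(i beats j)."""
        a, b = min(i, j), max(i, j)
        alpha, beta = self.post[(a, b)]
        p = alpha / (alpha + beta)
        return p if i == a else 1.0 - p

    def expected_ranks(self):
        """Approximate expected rank of each item (independence assumption):
        rank(i) = 1 + sum over j != i of P(j beats i)."""
        return [1.0 + sum(self.p_win(j, i) for j in range(self.n) if j != i)
                for i in range(self.n)]

    def entropy(self, pair):
        """Entropy of the predictive outcome for one pair (in bits)."""
        alpha, beta = self.post[pair]
        p = alpha / (alpha + beta)
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

    def most_informative_pair(self):
        """Active learning step: the pair with the most uncertain outcome."""
        return max(self.post, key=self.entropy)


def most_informative_pair_overall(models):
    """Aggregate per-pair entropies across LO component models (here simply
    the mean, an assumed aggregation rule) and return the pair with the
    highest aggregated entropy."""
    pairs = models[0].post.keys()
    return max(pairs,
               key=lambda p: sum(m.entropy(p) for m in models) / len(models))
```

In use, one `BayesianCJ` instance would be kept per LO component; holistic and per-component rankings then come from `expected_ranks()` on each model, and `most_informative_pair_overall` picks the single comparison to show the next assessor.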