Evaluating models and datasets in computer vision remains a challenging task, with most leaderboards relying solely on accuracy. While accuracy is a popular metric for model evaluation, it provides only a coarse assessment, reducing a model's performance to a single score aggregated over all dataset items. This paper explores Item Response Theory (IRT), a framework that infers interpretable latent parameters for an ensemble of models and for each dataset item, enabling richer evaluation and analysis beyond a single accuracy number. Leveraging IRT, we assess model calibration, select informative data subsets, and demonstrate the usefulness of its latent parameters for analyzing and comparing models and datasets in computer vision.
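For reference, a standard two-parameter logistic (2PL) formulation is one common instantiation of the latent parameters IRT infers; the exact variant used in this work may differ. It models the probability that model $j$, with latent ability $\theta_j$, responds correctly to dataset item $i$, which has difficulty $b_i$ and discrimination $a_i$:

\[
P(y_{ij} = 1 \mid \theta_j, a_i, b_i) = \sigma\bigl(a_i(\theta_j - b_i)\bigr) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}}
\]

Under this kind of model, items with high discrimination $a_i$ separate strong models from weak ones sharply, while the difficulty $b_i$ locates each item on the same latent scale as model ability, which is what enables item-level analyses such as informative-subset selection.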