Active Learning (AL) deals with identifying the most informative samples for labeling to reduce data annotation costs in supervised learning tasks. AL research suffers from the fact that performance gains reported in the literature generalize poorly and that experiments are typically repeated only a small number of times. To overcome these obstacles, we propose CDALBench, the first active learning benchmark that includes tasks in computer vision, natural language processing, and tabular learning. Furthermore, by providing an efficient greedy oracle, CDALBench can be evaluated with 50 runs per experiment. We show that both the cross-domain character and a large number of repetitions are crucial for a sophisticated evaluation of AL research. Concretely, we show that the superiority of specific methods varies across domains, making it important to evaluate Active Learning with a cross-domain benchmark. Additionally, we show that a large number of runs is crucial: with only three runs, as is common in the literature, the apparent superiority of specific methods can vary strongly with the specific runs. This effect is so strong that, depending on the seed, even a well-established method can perform both significantly better and significantly worse than random on the same dataset.
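The abstract refers to an efficient greedy oracle. As an illustrative sketch only (the abstract does not specify CDALBench's actual oracle, so every name and design choice below is an assumption), a greedy oracle for active learning can be understood as a strategy with access to the true labels that, at each step, picks the pool sample whose addition to the labeled set most improves validation accuracy:

```python
# Illustrative sketch of a greedy-oracle selection step (assumed design,
# not the CDALBench implementation): try every pool sample with its TRUE
# label and keep the one that maximizes validation accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def greedy_oracle_select(X_lab, y_lab, X_pool, y_pool, X_val, y_val):
    """Return (index, accuracy) of the pool sample whose labeling
    greedily maximizes validation accuracy."""
    best_idx, best_acc = 0, -1.0
    for i in range(len(X_pool)):
        X_new = np.vstack([X_lab, X_pool[i:i + 1]])
        y_new = np.append(y_lab, y_pool[i])
        clf = LogisticRegression(max_iter=200).fit(X_new, y_new)
        acc = clf.score(X_val, y_val)
        if acc > best_acc:
            best_idx, best_acc = i, acc
    return best_idx, best_acc

# Small synthetic demo: seed the labeled set with one sample per class.
X, y = make_classification(n_samples=80, n_features=10, random_state=0)
X_val, y_val = X[60:], y[60:]
lab_idx = [int(np.where(y[:60] == c)[0][0]) for c in (0, 1)]
pool_idx = [i for i in range(60) if i not in lab_idx]
idx, acc = greedy_oracle_select(X[lab_idx], y[lab_idx],
                                X[pool_idx], y[pool_idx], X_val, y_val)
print(idx, acc)
```

Such an oracle is "efficient" relative to exhaustively searching over whole labeling sequences: each step costs one model fit per pool sample rather than exploring all orderings.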