How can we test AI performance? This question seems trivial, but it isn't. Standard benchmarks often suffer from problems such as small, in-distribution test sets, oversimplified metrics, unfair comparisons, and short-term outcome pressure. As a consequence, good performance on standard benchmarks does not guarantee success in real-world scenarios. To address these problems, we present Touchstone, a large-scale collaborative segmentation benchmark covering 9 types of abdominal organs. This benchmark is based on 5,195 training CT scans from 76 hospitals around the world and 5,903 testing CT scans from 11 additional hospitals. This diverse test set enhances the statistical significance of benchmark results and rigorously evaluates AI algorithms across various out-of-distribution scenarios. We invited 14 inventors of 19 AI algorithms to train their algorithms, while our team, as a third party, independently evaluated these algorithms on three test sets. In addition, we evaluated pre-existing AI frameworks (which, unlike individual algorithms, are more flexible and can support different algorithms), including MONAI from NVIDIA, nnU-Net from DKFZ, and numerous other open-source frameworks. We are committed to expanding this benchmark to encourage further innovation in AI algorithms for the medical domain.
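For concreteness, the following is a minimal sketch of how a third-party evaluation might score an organ segmentation prediction against a reference annotation using the Dice similarity coefficient, a standard metric for this task; the label values, array shapes, and synthetic data are illustrative assumptions, not the benchmark's actual protocol.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for one organ label in two label maps."""
    p = pred == label
    t = truth == label
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # organ absent from both masks: treat as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom

# Illustrative only: a tiny synthetic 3D label map with two hypothetical organ labels.
rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=(4, 64, 64))  # 0 = background, 1-2 = organs
pred = truth.copy()
# Corrupt the prediction for organ 2 to simulate segmentation errors.
pred[truth == 2] = rng.integers(1, 3, size=int((truth == 2).sum()))

for organ_label in (1, 2):
    print(f"label {organ_label}: DSC = {dice_score(pred, truth, organ_label):.3f}")
```

In a real evaluation the label maps would come from CT segmentation outputs rather than random arrays, and the per-organ scores would be aggregated across the test set.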