Test-time augmentation (TTA) is a well-known technique applied during the testing phase of computer vision tasks. It aggregates predictions over multiple augmented versions of the input data, most commonly by simple averaging. This paper introduces BayTTA (Bayesian-based TTA), a novel framework for optimizing TTA that builds on Bayesian Model Averaging (BMA). First, we generate a list of models associated with different variations of the input data created through TTA. Then, we use BMA to combine the model predictions, weighting each by its posterior probability. This approach accounts for model uncertainty and thus enhances the predictive performance of the underlying machine learning or deep learning model. We evaluate BayTTA on several public datasets, including three medical image datasets (skin cancer, breast cancer, and chest X-ray images) and two well-known gene editing datasets, CRISPOR and GUIDE-seq. Our experimental results indicate that BayTTA can be effectively integrated into state-of-the-art deep learning models used in medical image analysis, as well as into popular pre-trained CNN models such as VGG-16, MobileNetV2, DenseNet201, ResNet152V2, and InceptionResNetV2, improving their accuracy and robustness.
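The BMA aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes posterior weights are approximated by a softmax over per-model (validation) log-likelihoods under a uniform prior, and the function and variable names are hypothetical.

```python
import numpy as np

def bma_tta(predictions, log_likelihoods):
    """Combine TTA predictions via Bayesian Model Averaging (illustrative sketch).

    predictions: array of shape (M, N, C) -- class probabilities from M models,
                 each built from a different augmented version of the input data.
    log_likelihoods: array of shape (M,) -- e.g. validation log-likelihood of
                 each model, used to approximate its posterior probability.
    """
    # Posterior weights proportional to exp(log-likelihood), assuming a uniform
    # prior over models; subtracting the max gives numerical stability.
    w = np.exp(log_likelihoods - log_likelihoods.max())
    w /= w.sum()
    # Posterior-weighted average over the M prediction sets -> shape (N, C)
    return np.tensordot(w, predictions, axes=1)

# Toy usage: 3 augmented views, 1 sample, 2 classes
preds = np.array([[[0.7, 0.3]], [[0.6, 0.4]], [[0.9, 0.1]]])
ll = np.array([-10.0, -12.0, -9.0])
combined = bma_tta(preds, ll)
```

Unlike plain averaging, the view whose model explains the (validation) data best dominates the combined prediction, which is how BMA incorporates model uncertainty.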