In this work, we apply state-of-the-art self-supervised learning techniques to a large dataset of seafloor imagery, \textit{BenthicNet}, and study their performance on a complex hierarchical multi-label (HML) classification downstream task. In particular, we demonstrate the capacity to conduct HML training in scenarios where annotations are missing at multiple levels of the hierarchy, a common situation when handling heterogeneous real-world data collected by multiple research groups with differing data collection protocols. We find that, when using the smaller one-hot image label datasets typical of local- or regional-scale benthic science projects, models pre-trained with self-supervision on a larger collection of in-domain benthic data outperform models pre-trained on ImageNet. In the HML setting, we find that models attain deeper and more precise classifications when pre-trained with self-supervision on in-domain data. We hope this work can establish a benchmark for future models in the field of automated underwater image annotation and can guide work in other domains with hierarchical annotations of mixed resolution.
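The missing-annotation setting described above can be made concrete with a small sketch. This is illustrative only, not the paper's implementation: a per-level cross-entropy that simply skips hierarchy levels whose label is unknown, so images annotated to different depths of the hierarchy contribute loss only at the levels where they are labeled. All function and variable names here are hypothetical.

```python
import math

def softmax_xent(logits, target):
    """Cross-entropy of a single softmax prediction against an integer target."""
    m = max(logits)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[target] - log_z)

def masked_hierarchical_loss(logits_per_level, targets_per_level, ignore=-1):
    """Average cross-entropy over all (sample, hierarchy level) pairs that are
    actually annotated; levels marked with `ignore` contribute nothing.

    logits_per_level[i][l] -- logit vector for sample i at hierarchy level l
    targets_per_level[i][l] -- class index, or `ignore` when that level
                               carries no annotation for this sample
    """
    total, n = 0.0, 0
    for sample_logits, sample_targets in zip(logits_per_level, targets_per_level):
        for logits, target in zip(sample_logits, sample_targets):
            if target != ignore:
                total += softmax_xent(logits, target)
                n += 1
    return total / max(n, 1)  # 0.0 if nothing is annotated
```

Under this masking scheme, an image labeled only to the first level of the hierarchy and an image labeled to all levels can share a batch: each contributes gradient signal exactly where its annotation exists.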