Underwater dense prediction, especially depth estimation and semantic segmentation, is crucial for a comprehensive understanding of underwater scenes. Nevertheless, high-quality, large-scale underwater datasets with dense annotations remain scarce because of the complex environment and the exorbitant cost of data collection. This paper proposes a unified Text-to-Image and DEnse annotation generation method (TIDE) for underwater scenes. It relies solely on text as input to simultaneously generate realistic underwater images and multiple highly consistent dense annotations. Specifically, we unify text-to-image and text-to-dense-annotation generation within a single model, and introduce an Implicit Layout Sharing (ILS) mechanism together with a cross-modal interaction method, Time Adaptive Normalization (TAN), to jointly optimize the consistency between the generated images and their dense annotations. We synthesize a large-scale underwater dataset with TIDE to validate the effectiveness of our method on underwater dense prediction tasks. The results demonstrate that our method effectively improves the performance of existing underwater dense prediction models and mitigates the scarcity of underwater data with dense annotations. We hope our method can offer new perspectives on alleviating data scarcity in other fields. The code is available at https://github.com/HongkLin/TIDE.
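
To make the cross-modal interaction concrete, below is a minimal PyTorch sketch of a timestep-conditioned normalization layer in the spirit of TAN. It assumes an AdaLN-style design, common in diffusion transformers, in which the diffusion timestep embedding predicts a per-channel scale and shift for normalized token features; the class and argument names are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class TimeAdaptiveNorm(nn.Module):
    """Hypothetical sketch of timestep-conditioned normalization (AdaLN-style).

    The diffusion timestep embedding predicts a per-channel scale and shift
    that modulate normalized token features, letting one stream (e.g., the
    image branch) condition the other (e.g., the dense-annotation branch)
    in a time-dependent way.
    """

    def __init__(self, dim: int, time_dim: int):
        super().__init__()
        # Affine parameters come from the timestep, not from LayerNorm itself.
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # Maps the timestep embedding to (scale, shift) pairs.
        self.to_scale_shift = nn.Sequential(
            nn.SiLU(),
            nn.Linear(time_dim, 2 * dim),
        )

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); t_emb: (batch, time_dim)
        scale, shift = self.to_scale_shift(t_emb).chunk(2, dim=-1)
        # Broadcast the per-channel modulation over the token dimension.
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)


if __name__ == "__main__":
    tan = TimeAdaptiveNorm(dim=256, time_dim=128)
    x = torch.randn(2, 64, 256)   # token features
    t_emb = torch.randn(2, 128)   # timestep embedding
    print(tan(x, t_emb).shape)    # torch.Size([2, 64, 256])
```

The `(1 + scale)` form keeps the layer close to an identity map at initialization, a standard stabilizing choice for adaptive normalization in diffusion models.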