Mapping individual tree crowns is essential for tasks such as maintaining urban tree inventories and monitoring forest health, which help us understand and care for our environment. However, automatically separating individual crowns in aerial imagery is challenging due to factors such as texture and partial crown overlap. In this study, we present a method for training deep learning models that segment and separate individual trees in RGB and multispectral images, using pseudo-labels derived from airborne laser scanning (ALS) data. We show that the ALS-derived pseudo-labels can be enhanced with a zero-shot instance segmentation model, Segment Anything Model 2 (SAM 2). Our method provides domain-specific training annotations for optical-image-based models without any manual annotation cost, and the resulting segmentation models outperform available models targeted at general-domain deployment on the same task.