Satellite imagery has played an increasingly important role in post-disaster building damage assessment. Unfortunately, current methods still rely on manual visual interpretation, which is time-consuming and often yields low accuracy. To address the limitations of manual interpretation, there has been a significant increase in efforts to automate the process. We present a solution that performs the two most important tasks in building damage assessment, segmentation and classification, through deep-learning models. We show our results submitted as part of the xView2 Challenge, a competition to design better models for identifying buildings and their damage level after exposure to multiple kinds of natural disasters. Our best model couples a building identification semantic segmentation convolutional neural network (CNN) to a building damage classification CNN, with a combined F1 score of 0.66, surpassing the xView2 challenge baseline F1 score of 0.28. We find that although our model identifies buildings with relatively high accuracy, classifying building damage across disaster types remains difficult because different damage levels are visually similar and damage distributions differ across disaster types. This highlights that a probabilistic prior estimate of disaster damage may be needed to obtain accurate predictions.
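As a hedged illustration of how a coupled segmentation-then-classification pipeline combines its two task scores, the sketch below assumes the xView2-style weighted combination of a localization F1 and a damage-classification F1 (the 0.3/0.7 weighting follows the challenge's published metric; treat the exact weights as an assumption if adapting to another benchmark):

```python
def combined_xview2_score(f1_localization: float, f1_damage: float) -> float:
    """Weighted combination of the two task-level F1 scores.

    Assumes the xView2 weighting: 30% building localization (segmentation),
    70% damage classification. The damage F1 itself is typically a harmonic
    mean over per-class F1 scores, computed upstream of this function.
    """
    return 0.3 * f1_localization + 0.7 * f1_damage


# Example: a model that localizes buildings well but classifies damage
# poorly still receives a low combined score, reflecting the challenge's
# emphasis on the harder classification task.
print(combined_xview2_score(0.85, 0.58))
```

Under this weighting, even near-perfect segmentation cannot compensate for weak damage classification, which is consistent with the abstract's observation that classification, not localization, is the bottleneck.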