Hazardous environments and hard-to-reach terrain significantly complicate effective disaster management and recovery. These challenges can be addressed by deploying unmanned aerial vehicles (UAVs) equipped with embedded platforms and optical sensors. In this work, we focus on enabling onboard aerial image processing for accurate, real-time disaster detection. Although the limited hardware resources of UAVs make such a setting challenging, it avoids the privacy, connectivity, and latency issues that arise when computation is offloaded. We propose a UAV-assisted edge framework for disaster detection, built around a model optimized for onboard real-time aerial image classification. The model is optimized using post-training quantization techniques. To address the scarcity of disaster cases in existing benchmark datasets, and thereby support real-world adoption of our model, we construct a novel dataset, DisasterEye, featuring disaster scenes captured by UAVs and by individuals on-site. Experimental results demonstrate the efficacy of our model, which achieves high accuracy with reduced inference latency and memory usage on both conventional machines and resource-constrained devices. The scalability and adaptability of our method make it a powerful solution for real-time disaster management on resource-constrained UAV platforms.
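To illustrate the kind of post-training quantization the abstract refers to, the sketch below applies PyTorch's dynamic post-training quantization to a toy classifier. This is a minimal example under stated assumptions, not the authors' actual pipeline: the network architecture, input resolution, and five-class output are all placeholders standing in for the paper's aerial image classifier.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained aerial-image classifier
# (input resolution and number of disaster classes are assumptions).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64 * 3, 64),
    nn.ReLU(),
    nn.Linear(64, 5),
)
model.eval()

# Post-training dynamic quantization: Linear weights are stored as int8,
# shrinking the memory footprint and speeding up CPU inference on
# resource-constrained edge devices, with no retraining required.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Inference works as before; activations are quantized on the fly
# and outputs are returned as floats.
logits = quantized(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 5])
```

Dynamic quantization is the simplest post-training variant; static quantization with a calibration dataset, or an edge toolchain such as TensorFlow Lite, would follow the same deploy-after-training principle.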