Computer vision can accelerate ecological research and conservation monitoring, yet adoption in ecology lags, in part because of a lack of trust in black-box neural-network models. We address this challenge by applying post-hoc explanations that provide evidence for predictions and document limitations important to field deployment. Using aerial imagery from Glacier Bay National Park, we train a Faster R-CNN to detect pinnipeds (harbor seals) and generate explanations via gradient-based class activation mapping (HiResCAM, LayerCAM), local interpretable model-agnostic explanations (LIME), and perturbation-based explanations. We assess explanations along three axes relevant to field use: (i) localization fidelity: whether high-attribution regions coincide with the animal rather than background context; (ii) faithfulness: whether deletion/insertion tests produce corresponding changes in detector confidence; and (iii) diagnostic utility: whether explanations reveal systematic failure modes. Explanations concentrate on seal torsos and contours rather than surrounding ice and rock, and removing the seals reduces detection confidence, providing model-based evidence for true positives. The analysis also uncovers recurrent error sources, including confusion between seals and black ice or rocks. We translate these findings into actionable next steps for model development, including more targeted data curation and augmentation. By pairing object detection with post-hoc explainability, we can move beyond "black-box" predictions toward auditable, decision-supporting tools for conservation monitoring.
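The deletion test described above can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): `score_fn` stands in for any function that returns the detector's confidence for a given image, and the masked region is filled with the image mean as one simple choice of baseline.

```python
import numpy as np


def deletion_confidence_drop(score_fn, image, box):
    """Faithfulness check: mask a detected region and measure how much
    detector confidence drops. A large positive drop suggests the
    detection genuinely depends on the pixels inside the box.

    score_fn : callable mapping an image array to a confidence score
    image    : 2D (or HxWxC) numpy array
    box      : (x0, y0, x1, y1) pixel coordinates of the region to delete
    """
    base_score = score_fn(image)
    masked = image.copy()
    x0, y0, x1, y1 = box
    # Replace the region with the image mean (a common, simple baseline;
    # alternatives include blurring or noise).
    masked[y0:y1, x0:x1] = image.mean()
    return base_score - score_fn(masked)
```

A detection whose confidence barely changes under deletion of its own box would be flagged as relying on background context rather than the animal itself.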