Monocular depth estimation (MDE) is essential for numerous applications yet is impeded by the substantial computational demands of accurate deep learning models. To mitigate this, we introduce a novel Teacher-Independent Explainable Knowledge Distillation (TIE-KD) framework that streamlines knowledge transfer from complex teacher models to compact student networks without requiring architectural similarity between them. The cornerstone of TIE-KD is the Depth Probability Map (DPM), an explainable feature map that interprets the teacher's output, enabling feature-based knowledge distillation solely from the teacher's response. This allows the student to learn efficiently while leveraging the strengths of feature-based distillation. Extensive evaluations on the KITTI dataset show that TIE-KD not only outperforms conventional response-based KD methods but also performs consistently across diverse teacher and student architectures. The robustness and adaptability of TIE-KD underscore its potential for applications requiring efficient and interpretable models, affirming its practicality for real-world deployment.
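To make the core idea concrete, the following is a minimal sketch (not the paper's exact formulation) of how a predicted depth map could be converted into a Depth Probability Map and used as a distillation target: each pixel's depth is softly assigned to a set of discretized depth bins, and the student is trained to match the teacher's per-pixel bin distribution. The bin layout, temperature, and KL-divergence loss below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def depth_to_dpm(depth, bins, temperature=1.0):
    """Convert a depth map (B, 1, H, W) into a Depth Probability Map
    (B, K, H, W): a soft distribution over K depth bins where probability
    mass concentrates on bins closest to the predicted depth.
    The bin centers and temperature are illustrative choices, not the
    paper's specification."""
    dist = torch.abs(depth - bins.view(1, -1, 1, 1))  # (B, K, H, W): distance to each bin center
    return F.softmax(-dist / temperature, dim=1)      # closer bins receive higher probability

def dpm_distillation_loss(student_depth, teacher_depth, bins, temperature=1.0):
    """Feature-style distillation derived from the response alone: both depth
    maps are mapped to DPMs and compared with KL divergence."""
    p_teacher = depth_to_dpm(teacher_depth, bins, temperature)
    log_p_student = torch.log(depth_to_dpm(student_depth, bins, temperature) + 1e-8)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")

# Example: 64 bins spanning an assumed 0.5-80 m range (typical for KITTI)
bins = torch.linspace(0.5, 80.0, 64)
teacher_out = torch.rand(2, 1, 96, 320) * 80  # stand-in for teacher-predicted depth
student_out = torch.rand(2, 1, 96, 320) * 80  # stand-in for student-predicted depth
loss = dpm_distillation_loss(student_out, teacher_out, bins)
print(loss.item())
```

Because the DPM is computed purely from the teacher's output depth map, this kind of loss needs no access to the teacher's internal features, which is what lets the teacher and student architectures differ freely.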