When a neural network encounters unfamiliar data that deviate from its training set, this signifies a domain shift. While such networks output predictions on these inputs, they typically fail to account for how familiar they are with the novel observations. This challenge becomes even more pronounced in resource-constrained settings, such as embedded systems or edge devices. To address it, we aim to recalibrate a neural network's decision boundaries in relation to its cognizance of the data it observes, introducing an approach we coin certainty distillation. While prevailing works navigate unsupervised domain adaptation (UDA) by minimizing model entropy, they inadvertently produce models that suffer from calibration errors, a dilemma we term the over-certainty phenomenon. In this paper, we examine the drawbacks of this conventional learning paradigm. As a remedy, we propose a UDA algorithm that not only improves accuracy but also ensures model calibration, all while remaining suitable for environments with limited computational resources.
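To make the entropy-minimization critique concrete, the following is a minimal NumPy sketch (our own illustration, not the paper's method): an entropy objective always scores a sharply peaked predictive distribution as better than a spread-out one, even when the peaked prediction's top class is wrong, which is exactly how over-certainty can arise under domain shift.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    # Shannon entropy of a predictive distribution (in nats);
    # entropy-minimization UDA objectives drive this toward zero.
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

# Two hypothetical predictions on the same shifted input
# (logit values are illustrative, not from any real model):
uncertain = softmax(np.array([1.0, 0.8, 0.6]))  # spread-out, cautious
confident = softmax(np.array([8.0, 0.8, 0.6]))  # sharply peaked

# The entropy objective prefers the peaked prediction regardless of
# whether its argmax is actually correct on the unfamiliar input.
print(predictive_entropy(uncertain) > predictive_entropy(confident))
```

The sketch shows only the incentive structure of the loss; it says nothing about accuracy, which is why minimizing entropy alone can degrade calibration.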