Deep neural networks (DNNs) can deteriorate in accuracy when deployment data differs from training data. While performing online training at all timesteps can improve accuracy, it is computationally expensive. We propose DecTrain, a new algorithm that decides when to train a monocular depth DNN online using self-supervision with low overhead. To make the decision at each timestep, DecTrain compares the cost of training with the predicted accuracy gain. We evaluate DecTrain on out-of-distribution data and find that it maintains accuracy compared to online training at all timesteps, while training only 44% of the time on average. We also compare the recovery of a low-inference-cost DNN using DecTrain against a more generalizable, high-inference-cost DNN on various sequences. DecTrain recovers the majority (97%) of the accuracy gain of online training at all timesteps while reducing computation relative to the high-inference-cost DNN, which recovers only 66%. With an even smaller DNN, we achieve 89% recovery while reducing computation by 56%. DecTrain enables low-cost online training for a smaller DNN to achieve competitive accuracy with a larger, more generalizable DNN at a lower overall computational cost.
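The per-timestep decision described above can be sketched as a simple cost-benefit rule. This is a minimal illustrative sketch, not the paper's actual implementation: the function and parameter names (`should_train`, `predicted_gain`, `train_cost`, `tradeoff`) are assumptions, and the real algorithm's accuracy-gain predictor is a learned component not shown here.

```python
def should_train(predicted_gain: float, train_cost: float, tradeoff: float = 1.0) -> bool:
    """Hypothetical decision rule: train online at this timestep only when
    the predicted accuracy gain outweighs the (scaled) compute cost of a
    training step. `tradeoff` weights cost against gain."""
    return predicted_gain > tradeoff * train_cost


def run_episode(gains, train_cost: float, tradeoff: float = 1.0):
    """Apply the rule over a sequence of timesteps and report the fraction
    of timesteps on which training is performed."""
    decisions = [should_train(g, train_cost, tradeoff) for g in gains]
    return decisions, sum(decisions) / len(decisions)
```

Under this rule, training is skipped on timesteps where the scene is in-distribution (small predicted gain) and triggered on out-of-distribution timesteps, which is how a fraction like the reported 44% training rate could arise.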