Domain incremental learning (DIL) has been studied in previous work on deep neural network models for classification. In DIL, samples from new domains are assumed to arrive over time, and the model must classify inputs from all domains observed so far. In practice, however, we may need to perform DIL under the constraint that samples from a new domain are observed only rarely. In this study, we therefore consider the extreme case in which only one sample from the new domain is available, which we call one-shot DIL. We first show empirically that existing DIL methods do not work well in one-shot DIL, and we analyze the reasons for this failure through a series of investigations. Our analysis reveals that the difficulty of one-shot DIL stems from the statistics in the batch normalization layers. Based on this finding, we propose a technique for adjusting these statistics and demonstrate its effectiveness through experiments on open datasets.
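To illustrate the batch-normalization issue the abstract points to, the following is a minimal sketch (an illustration under our own assumptions, not the paper's proposed technique): batch normalization layers keep exponential-moving-average estimates of activation mean and variance, and a single sample from a shifted domain yields a noisy, degenerate batch estimate that perturbs these running statistics away from both the source and the new domain.

```python
import random
import statistics

def update_running_stats(running_mean, running_var, batch, momentum=0.1):
    """Exponential-moving-average update of BN running statistics,
    mimicking how a batch-norm layer tracks mean/variance in training."""
    batch_mean = statistics.fmean(batch)
    batch_var = statistics.pvariance(batch)  # population variance over the batch
    new_mean = (1 - momentum) * running_mean + momentum * batch_mean
    new_var = (1 - momentum) * running_var + momentum * batch_var
    return new_mean, new_var

random.seed(0)
mean, var = 0.0, 1.0

# Source domain: many batches drawn from N(1, 2^2); the running
# statistics converge toward the true source-domain statistics.
for _ in range(500):
    batch = [random.gauss(1.0, 2.0) for _ in range(64)]
    mean, var = update_running_stats(mean, var, batch)

# One-shot DIL: the activations of a single new-domain sample
# (here: one "batch" drawn from a shifted distribution N(5, 2^2)).
# A single sample cannot reliably re-estimate the new domain's
# statistics, yet it pulls the running estimates off the source domain.
one_shot = [random.gauss(5.0, 2.0) for _ in range(64)]
shifted_mean, shifted_var = update_running_stats(mean, var, one_shot)
```

After the source-domain loop, `mean` sits near 1.0; the single shifted batch moves it part of the way toward 5.0, so the resulting statistics match neither domain, which is the kind of mismatch the abstract attributes the failure of one-shot DIL to.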