Continual learning and few-shot learning are important frontiers in progress toward broader Machine Learning (ML) capabilities. Recently, there has been intense interest in combining the two. One of the first frameworks to do so was the Continual Few-Shot Learning (CFSL) framework of Antoniou et al. (arXiv:2004.11967). In this study, we extend CFSL in two ways that capture a broader range of challenges, important for intelligent agent behaviour in real-world conditions. First, we increased the number of classes by an order of magnitude, making the results more comparable to standard continual learning experiments. Second, we introduced an 'instance test', which requires recognition of specific instances of classes -- a capability of animal cognition that is usually neglected in ML. For an initial exploration of ML model performance under these conditions, we selected representative baseline models from the original CFSL work and added a model variant with replay. As expected, learning more classes is more difficult than in the original CFSL experiments, and interestingly, the way in which image instances and classes are presented affects classification performance. Surprisingly, accuracy on the baseline instance test is comparable to that of other classification tasks, but degrades markedly under significant occlusion and noise. The use of replay for consolidation substantially improves performance on both types of task, particularly the instance test.