Continual learning (CL) is the sub-field of machine learning concerned with accumulating knowledge in dynamic environments. So far, CL research has mainly focused on incremental classification tasks, where models learn to classify new categories while retaining knowledge of previously learned ones. Here, we argue that maintaining such a focus limits both the theoretical development and the practical applicability of CL methods. Through a detailed analysis of concrete examples - including multi-target classification, robotics with constrained output spaces, learning in continuous task domains, and higher-level concept memorization - we demonstrate how current CL approaches often fail when applied beyond standard classification. We identify three fundamental challenges: (C1) the nature of continuity in learning problems, (C2) the choice of appropriate spaces and metrics for measuring similarity, and (C3) the role of learning objectives beyond classification. For each challenge, we provide specific recommendations to help move the field forward, including formalizing temporal dynamics through distribution processes, developing principled approaches for continuous task spaces, and incorporating density estimation and generative objectives. In so doing, this position paper aims to broaden the scope of CL research while strengthening its theoretical foundations, making CL more applicable to real-world problems.