Continual learning (CL) is an online learning paradigm over streaming data generated sequentially by different tasks, aiming to keep the forgetting loss on previously learned tasks small. Existing work focuses on reducing the forgetting loss under a given task sequence. However, if similar tasks keep arriving until the end of the time horizon, the forgetting loss on earlier, distinct tasks remains large. In practical IoT networks, an autonomous vehicle that samples data and learns different tasks can reroute to alter the order of the task pattern at an increased travelling cost. To the best of our knowledge, we are the first to study how to opportunistically route the testing object and alter the task sequence in CL. We formulate a new optimization problem and prove it is NP-hard. We propose a polynomial-time algorithm that achieves approximation ratios of $\frac{3}{2}$ for the underparameterized case and $\frac{3}{2} + r^{1-T}$ for the overparameterized case, respectively, where $r := 1-\frac{n}{m}$ is a parameter determined by the feature number $m$ and the sample number $n$, and $T$ is the number of tasks. Simulation results verify our algorithm's close-to-optimum performance.