Adaptive networks today rely on overparameterized, fixed topologies that cannot break through the statistical conflicts they encounter in the data they are exposed to, and they are prone to "catastrophic forgetting" when they reuse existing structures to learn new tasks. We propose a structural adaptation method, DIRAD, that can complexify as needed and in a directed manner, without being limited by statistical conflicts within a dataset. We then extend this method into the PREVAL framework, designed to prevent "catastrophic forgetting" in continual learning by detecting novel data and assigning encountered data to suitable models adapted to process them, without requiring task labels anywhere in the workflow. We show that DIRAD reliably grows networks that achieve high performance while being orders of magnitude simpler than fixed-topology networks, and we demonstrate a proof-of-concept operation of PREVAL in which the system continually adapts to new tasks while detecting and discerning previously encountered ones.