Recently, a novel paradigm has been proposed for reinforcement learning-based NAS agents that revolves around the incremental improvement of a given architecture. We assess the ability of such reinforcement learning agents to transfer between different tasks. We perform our evaluation using the TransNAS-Bench-101 benchmark, and consider the efficacy of the transferred agents as well as how quickly they can be trained. We find that, in terms of final performance, pretraining an agent on one task benefits its performance on another task in all but one case. We also show that the training procedure for an agent can be shortened significantly by pretraining it on another task. Our results indicate that these effects occur regardless of the source or target task, although they are more pronounced for some tasks than for others. Overall, our results show that transfer learning can be an effective tool for mitigating the computational cost of the initial training procedure for reinforcement learning-based NAS agents.