Neural Architecture Search (NAS) currently relies heavily on labeled data, which is both expensive and time-consuming to acquire. In this paper, we propose a novel NAS framework based on Masked Autoencoders (MAE) that eliminates the need for labeled data during the search process. By replacing the supervised learning objective with an image reconstruction task, our approach enables the robust discovery of network architectures without compromising performance or generalization ability. Additionally, we address the performance collapse that the widely used Differentiable Architecture Search (DARTS) method suffers in the unsupervised paradigm by introducing a multi-scale decoder. Through extensive experiments on various search spaces and datasets, we demonstrate the effectiveness and robustness of the proposed method, providing empirical evidence of its superiority over baseline approaches.