Neural Architecture Search (NAS) relies heavily on labeled data, which is labor-intensive and time-consuming to obtain. In this paper, we propose a novel NAS method based on an unsupervised paradigm, specifically Masked Autoencoders (MAE), thereby eliminating the need for labeled data. By replacing the supervised learning objective with an image reconstruction task, our approach enables the efficient discovery of network architectures without compromising performance or generalization ability. Additionally, we address the performance collapse encountered by the widely used Differentiable Architecture Search (DARTS) in the unsupervised setting by designing a hierarchical decoder. Extensive experiments across various datasets demonstrate the effectiveness and robustness of our method, offering empirical evidence of its superiority over its counterparts.
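To make the label-free objective concrete, the sketch below illustrates the kind of masked-reconstruction loss that MAE-style training optimizes: pixels (in MAE, patches) are randomly masked, and the error is measured only on the masked positions. This is an illustrative assumption, not the paper's implementation; the function name, masking ratio, and per-pixel (rather than per-patch) masking are all simplifications.

```python
import numpy as np

def masked_reconstruction_loss(image, reconstruction, mask):
    """Mean squared error computed only over masked positions (mask == 1).

    In MAE the encoder sees only the visible patches and the decoder must
    reconstruct the masked ones; the loss ignores visible regions.
    """
    diff = (image - reconstruction) ** 2
    return float((diff * mask).sum() / mask.sum())

rng = np.random.default_rng(0)
img = rng.random((16, 16))
# Mask roughly 75% of positions, a ratio in the range MAE typically uses.
mask = (rng.random((16, 16)) < 0.75).astype(float)
# Stand-in "reconstruction": the image plus small noise.
recon = img + 0.1 * rng.standard_normal((16, 16))
loss = masked_reconstruction_loss(img, recon, mask)
```

In an unsupervised NAS setting, a loss of this form can replace the supervised cross-entropy signal that DARTS-style methods normally use to rank candidate architectures.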