The Visual Domain Adaptation (VisDA) 2021 Challenge calls for unsupervised domain adaptation (UDA) methods that can handle both input distribution shift and label set mismatch between the source and target domains. In this report, we introduce a universal domain adaptation (UniDA) method that combines several popular feature extraction and domain adaptation schemes. First, we adopt VOLO, a Transformer-based architecture with state-of-the-art performance on several visual tasks, as the backbone to extract effective feature representations. Second, we modify the open-set classifier of OVANet to recognize the unknown class with competitive accuracy and robustness. On the challenge leaderboard, our proposed UniDA method ranks 3rd with 48.49% ACC and 70.8% AUROC in the VisDA 2021 Challenge.
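For concreteness, the open-set decision in OVANet-style methods can be sketched as follows: a closed-set classifier proposes a candidate class, and that class's one-vs-all head then accepts the sample as known or rejects it as unknown. This is a minimal illustrative sketch, not the exact classifier used in our submission; the function name, the 0.5 threshold, and the two-logit layout of the one-vs-all heads are assumptions for illustration.

```python
import math

def ovanet_predict(closed_logits, ova_logits, unknown_label=-1, threshold=0.5):
    """Open-set prediction in the style of OVANet (illustrative sketch).

    closed_logits: per-class logits from the closed-set classifier.
    ova_logits: one (positive, negative) logit pair per class from the
        one-vs-all heads; layout is an assumption for this sketch.
    """
    # Step 1: closed-set candidate class (argmax over closed-set logits).
    c = max(range(len(closed_logits)), key=lambda i: closed_logits[i])
    # Step 2: softmax over the candidate class's two one-vs-all logits
    # gives the probability that the sample truly belongs to class c.
    pos, neg = ova_logits[c]
    known_prob = math.exp(pos) / (math.exp(pos) + math.exp(neg))
    # Step 3: reject as "unknown" when the one-vs-all head is not confident.
    return c if known_prob >= threshold else unknown_label
```

A sample whose candidate class has a confident one-vs-all "positive" score keeps its closed-set label; otherwise it is mapped to the unknown class, which is what the open-set AUROC metric evaluates.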