In precision agriculture, vision models often struggle with new, unseen fields where crops and weeds have been influenced by external factors, resulting in compositions and appearances that differ from the learned distribution. This paper aims to adapt vision models to specific fields at low cost using Unsupervised Domain Adaptation (UDA). We explore a novel domain shift from a large, diverse pool of internet-sourced data to a small set of data collected by a robot at specific locations, minimizing the need for extensive on-field data collection. Additionally, we introduce a novel module -- the Multi-level Attention-based Adversarial Discriminator (MAAD) -- which can be integrated at the feature-extractor level of any detection model. In this study, we incorporate MAAD with CenterNet to simultaneously detect leaf, stem, and vein instances. Our results show significant performance improvements in the unlabeled target domain compared to baseline models, with a 7.5% increase in object detection accuracy and a 5.1% improvement in keypoint detection.