Object detection and segmentation are widely employed in computer vision applications, yet conventional models like the YOLO series, while efficient and accurate, are limited to predefined categories, hindering adaptability in open scenarios. Recent open-set methods leverage text prompts, visual cues, or prompt-free paradigms to overcome this, but often compromise between performance and efficiency due to high computational demands or deployment complexity. In this work, we introduce YOLOE, which integrates detection and segmentation across diverse open prompt mechanisms within a single highly efficient model, achieving real-time seeing anything. For text prompts, we propose the Re-parameterizable Region-Text Alignment (RepRTA) strategy, which refines pretrained textual embeddings via a re-parameterizable lightweight auxiliary network and enhances visual-textual alignment with zero inference and transfer overhead. For visual prompts, we present the Semantic-Activated Visual Prompt Encoder (SAVPE), which employs decoupled semantic and activation branches to deliver improved visual embeddings and accuracy with minimal complexity. For the prompt-free scenario, we introduce the Lazy Region-Prompt Contrast (LRPC) strategy, which identifies all objects using a built-in large vocabulary and specialized embeddings, avoiding reliance on costly language models. Extensive experiments show YOLOE's exceptional zero-shot performance and transferability with high inference efficiency and low training cost. Notably, on LVIS, with 3$\times$ less training cost and a 1.4$\times$ inference speedup, YOLOE-v8-S surpasses YOLO-Worldv2-S by 3.5 AP. When transferred to COCO, YOLOE-v8-L achieves gains of 0.6 AP$^b$ and 0.4 AP$^m$ over the closed-set YOLOv8-L with nearly 4$\times$ less training time. Code and models are available at https://github.com/THU-MIG/yoloe.