Object counting is pivotal for understanding the composition of scenes. Previously, this task was dominated by class-specific methods, which have gradually evolved into more adaptable class-agnostic strategies. However, these strategies come with their own limitations, such as the need for manual exemplar input and multiple passes for multiple categories, resulting in significant inefficiencies. This paper introduces a more practical approach that enables simultaneous counting of multiple object categories within an open-vocabulary framework. Our solution, OmniCount, stands out by using semantic and geometric priors from pre-trained models to count multiple user-specified object categories without additional training. OmniCount distinguishes itself by generating precise object masks and leveraging varied interactive prompts via the Segment Anything Model for efficient counting. To evaluate OmniCount, we created the OmniCount-191 benchmark, a first-of-its-kind dataset with multi-label object counts, including point, bounding-box, and VQA annotations. Our comprehensive evaluation on OmniCount-191, alongside other leading benchmarks, demonstrates OmniCount's exceptional performance, significantly outpacing existing solutions. The project webpage is available at https://mondalanindya.github.io/OmniCount.