We tackle the problem of quantifying the number of objects generated by a text-to-image model. Rather than retraining such a model for each new image domain of interest, which incurs high computational cost and scales poorly, we are the first to consider this problem from a domain-agnostic perspective. We propose QUOTA, an optimization framework for text-to-image models that enables effective object quantification across unseen domains without retraining. It leverages a dual-loop meta-learning strategy to optimize a domain-invariant prompt. Further, by integrating prompt learning with learnable counting and domain tokens, our method captures stylistic variations and maintains counting accuracy even for object classes not encountered during training. For evaluation, we adopt a new benchmark specifically designed for object quantification in domain generalization, enabling rigorous assessment of counting accuracy and adaptability across unseen domains in text-to-image generation. Extensive experiments demonstrate that QUOTA outperforms conventional models in both object quantification accuracy and semantic consistency, setting a new standard for efficient and scalable text-to-image generation in any domain.
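To make the dual-loop strategy concrete, the sketch below shows one common way such meta-learning is implemented: an inner gradient step adapts the prompt to a source domain, and an outer step requires the adapted prompt to still count correctly on a held-out domain. This is a minimal MAML-style illustration under our own assumptions; the shapes, the `counting_loss` stub, and the random domain features are placeholders, not the paper's actual generation pipeline or losses.

```python
import torch

# All shapes, names, and losses below are illustrative assumptions:
# a real pipeline would condition a frozen text-to-image model on the
# learnable prompt tokens and score the object count of generated images.
n_tokens, d = 8, 32                                    # assumed prompt length / embedding dim
prompt = torch.randn(n_tokens, d, requires_grad=True)  # shared, domain-invariant prompt tokens
outer_opt = torch.optim.Adam([prompt], lr=1e-3)
inner_lr = 1e-2

def counting_loss(tokens, domain_feat, target_count):
    # Stub standing in for (estimated object count - target count)^2.
    pred = (tokens * domain_feat).sum()
    return (pred - target_count) ** 2

for step in range(200):
    # Random stand-ins for a meta-train (source) and meta-test (held-out) domain.
    src_domain = torch.randn(n_tokens, d)
    tgt_domain = torch.randn(n_tokens, d)
    target = torch.tensor(3.0)                         # e.g. "three dogs"

    # Inner loop: one adaptation step on the source domain; keep the graph
    # so the outer update can differentiate through this step.
    inner = counting_loss(prompt, src_domain, target)
    grad = torch.autograd.grad(inner, prompt, create_graph=True)[0]
    adapted = prompt - inner_lr * grad

    # Outer loop: the adapted prompt must also count correctly on the
    # held-out domain, steering the shared prompt toward domain invariance.
    outer = counting_loss(adapted, tgt_domain, target)
    outer_opt.zero_grad()
    outer.backward()
    outer_opt.step()
```

Differentiating through the inner step (`create_graph=True`) is what distinguishes this scheme from plain multi-domain training: the outer gradient rewards prompts that adapt well to new domains, not merely prompts that fit each training domain.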