Recent subject-driven image customization achieves high fidelity, yet fine-grained instance-level spatial control remains an elusive challenge, hindering real-world applications. This limitation stems from two factors: the scarcity of scalable, position-annotated datasets, and the entanglement of identity and layout caused by global attention mechanisms. To address this, we introduce PositionIC, a unified framework for high-fidelity, spatially controllable multi-subject customization. First, we present BMPDS, the first automatic data-synthesis pipeline for position-annotated multi-subject datasets, providing crucial spatial supervision. Second, we design a lightweight, layout-aware diffusion framework that integrates a novel visibility-aware attention mechanism. This mechanism explicitly models spatial relationships via a NeRF-inspired volumetric weight regulation, effectively decoupling instance-level spatial embeddings from semantic identity features and enabling precise, occlusion-aware placement of multiple subjects. Extensive experiments demonstrate that PositionIC achieves state-of-the-art performance on public benchmarks, setting new records for spatial precision and identity consistency. Our work represents a significant step toward truly controllable, high-fidelity image customization in multi-entity scenarios. Code and data: https://github.com/MeiGen-AI/PositionIC.
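The abstract does not give the exact formulation of the NeRF-inspired volumetric weight regulation, but the classic NeRF rendering weight offers a plausible intuition for occlusion-aware weighting: each subject, ordered front to back, receives a weight equal to its own opacity times the transmittance through everything in front of it. The sketch below is purely illustrative (the function name, the per-subject scalar opacities, and their use as attention modulators are all assumptions, not the paper's method):

```python
import numpy as np

def visibility_weights(alphas):
    """NeRF-style volumetric weights for subjects ordered front to back.

    Given per-subject opacities alpha_i in [0, 1], returns
        w_i = alpha_i * prod_{j < i} (1 - alpha_j),
    so fully occluded subjects get weight ~0. These weights could then
    scale each subject's contribution in an attention layer (illustrative).
    """
    alphas = np.asarray(alphas, dtype=float)
    # Transmittance before subject i: product of (1 - alpha_j) for j < i.
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    return alphas * transmittance

# An opaque front subject (alpha = 1.0) fully occludes the one behind it:
# visibility_weights([1.0, 0.5]) -> [1.0, 0.0]
# Two half-transparent subjects share visibility:
# visibility_weights([0.5, 0.5]) -> [0.5, 0.25]
```

Note that the weights always sum to at most 1, which is what makes this a principled way to apportion influence among overlapping subjects rather than an ad-hoc mask.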