Anomaly synthesis is an effective way to augment abnormal samples for training. However, current anomaly synthesis methods rely predominantly on texture information as input, which limits the fidelity of the synthesized abnormal samples, because texture information alone is insufficient to correctly depict anomaly patterns, especially for logical anomalies. To overcome this obstacle, we present AnomalyXFusion, a framework designed to harness multi-modal information to enhance the quality of synthesized abnormal samples. AnomalyXFusion comprises two distinct yet synergistic modules: the Multi-modal In-Fusion (MIF) module and the Dynamic Dif-Fusion (DDF) module. The MIF module refines modality alignment by aggregating and integrating image, text, and mask features into a unified embedding space, termed X-embedding. The DDF module then enables controlled generation by adaptively adjusting the X-embedding conditioned on the diffusion step. In addition, to demonstrate the multi-modal representational power of AnomalyXFusion, we introduce a new dataset, MVTec Caption, which adds 2.2k accurate image-mask-text annotations to the MVTec AD and LOCO datasets. Comprehensive evaluations demonstrate the effectiveness of AnomalyXFusion, especially regarding the fidelity and diversity of logical anomalies. Project page: http://github.com/hujiecpp/MVTec-Caption
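The two-module pipeline described above can be sketched in miniature: project each modality's features into a shared space, aggregate them into a single X-embedding (the MIF idea), then scale that conditioning signal as a function of the diffusion step (the DDF idea). This is an illustrative sketch only; the feature dimensions, the averaging rule, and the linear step schedule below are assumptions for demonstration, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # shared embedding width (placeholder, not from the paper)

# Hypothetical per-modality features; dimensions are stand-ins for
# the outputs of real image, text, and mask encoders.
img_feat = rng.normal(size=(1, 128))
txt_feat = rng.normal(size=(1, 96))
mask_feat = rng.normal(size=(1, 32))

# Learned projections (random stand-ins here) align each modality to width D.
W_img = rng.normal(size=(128, D)) * 0.1
W_txt = rng.normal(size=(96, D)) * 0.1
W_mask = rng.normal(size=(32, D)) * 0.1

# MIF-style aggregation: fuse the aligned modality embeddings into one
# X-embedding (a simple average is used here purely for illustration).
x_embedding = (img_feat @ W_img + txt_feat @ W_txt + mask_feat @ W_mask) / 3.0

def ddf_modulate(x_emb, t, total_steps=1000):
    """DDF-style idea: weight the conditioning by the diffusion step.

    The linear ramp t / total_steps is an assumed schedule chosen for
    clarity; the actual adaptive adjustment is learned in the paper.
    """
    return (t / total_steps) * x_emb

# The modulated embedding would condition the diffusion model at step t.
cond = ddf_modulate(x_embedding, t=500)
print(cond.shape)  # (1, 64)
```

The key point the sketch conveys is that conditioning is not static: the same fused X-embedding is re-weighted at every denoising step, so early (coarse) and late (fine-detail) steps can receive different conditioning strengths.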