Diffusion models have revolutionized image generation by leveraging natural language to guide the creation of multimedia content. Despite significant advancements in such generative models, challenges persist in depicting detailed human-object interactions (HOIs), particularly regarding pose and object-placement accuracy. We introduce a training-free method named Reasoning and Correcting Diffusion (ReCorD) to address these challenges. Our model couples Latent Diffusion Models with Visual Language Models to refine the generation process, ensuring precise depictions of HOIs. We propose an interaction-aware reasoning module to improve the interpretation of the interaction, along with an interaction-correcting module that delicately refines the output image for more precise HOI generation. Through a meticulous process of pose selection and object positioning, ReCorD achieves superior fidelity in generated images while efficiently reducing computational requirements. We conduct comprehensive experiments on three benchmarks to demonstrate significant progress on text-to-image generation tasks, showcasing ReCorD's ability to render complex interactions accurately and to outperform existing methods in HOI classification score, as well as FID and Verb CLIP-Score. The project website is available at https://alberthkyhky.github.io/ReCorD/ .
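The abstract describes a reason-select-correct pipeline: an interaction-aware reasoning module proposes candidate poses and object placements, a Visual Language Model judges which candidate best matches the prompt, and an interaction-correcting module refines the object position. The following is a minimal structural sketch of that control flow only, not the actual ReCorD implementation: every function, class, score, and coordinate here is a hypothetical stub standing in for the real diffusion and VLM components.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Candidate:
    # A candidate generation: a pose label and an object bounding box (x, y, w, h).
    # Both fields are illustrative placeholders for latent-space quantities.
    pose: str
    obj_box: Tuple[int, int, int, int]

def reasoning_module(prompt: str) -> List[Candidate]:
    # Stand-in for the interaction-aware reasoning step: propose candidate
    # poses and object placements for the interaction in the prompt.
    return [
        Candidate("arm-extended", (40, 60, 30, 30)),
        Candidate("arm-lowered", (40, 120, 30, 30)),
    ]

def vlm_score(prompt: str, cand: Candidate) -> float:
    # Stand-in for a VLM judging how well a candidate depicts the interaction;
    # here, a toy heuristic preferring the extended-arm pose for "throw" prompts.
    return 1.0 if "throw" in prompt and cand.pose == "arm-extended" else 0.5

def correcting_module(cand: Candidate) -> Candidate:
    # Stand-in for the interaction-correcting step: nudge the object box
    # relative to the selected pose (here, a fixed upward shift).
    x, y, w, h = cand.obj_box
    return Candidate(cand.pose, (x, y - 10, w, h))

def recor_d_pipeline(prompt: str) -> Candidate:
    # Reason about the interaction, select the best candidate via the VLM,
    # then correct the object placement before final generation.
    candidates = reasoning_module(prompt)
    best = max(candidates, key=lambda c: vlm_score(prompt, c))
    return correcting_module(best)

result = recor_d_pipeline("a person throwing a frisbee")
print(result.pose, result.obj_box)  # → arm-extended (40, 50, 30, 30)
```

The training-free aspect is reflected in the structure: no module is optimized; the VLM only scores and the correcting step only adjusts placement at inference time.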