Real-world meal images often contain multiple food items, making reliable compositional food image generation important for applications such as recipe visualization and image-based dietary assessment, where multi-food data augmentation is needed. However, modern text-to-image diffusion models struggle to generate accurate multi-food images due to object entanglement, where adjacent foods (e.g., rice and soup) fuse together because many foods lack clear boundaries. To address this challenge, we introduce Prompt Grafting (PG), a training-free framework that combines explicit spatial cues in text with implicit layout guidance during sampling. PG runs a two-stage process: a layout prompt first establishes distinct regions, and the target prompt is grafted once layout formation stabilizes. The framework enables food entanglement control: users can specify which food items should remain separated or be intentionally mixed by editing the layout arrangement. Across two food datasets, our method significantly improves the presence of target objects and provides qualitative evidence of controllable separation.
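The two-stage process described above can be sketched as a per-step conditioning schedule: early denoising steps are conditioned on a layout prompt, and once layout formation has stabilized the target prompt is grafted in. This is a minimal illustrative sketch, not the paper's implementation; the names `graft_step`, `layout_prompt`, and `target_prompt` are assumptions, and the actual framework also applies implicit layout guidance during sampling, which is omitted here.

```python
def prompt_schedule(num_steps: int, graft_step: int,
                    layout_prompt: str, target_prompt: str) -> list[str]:
    """Return the conditioning prompt for each sampling step.

    Steps before `graft_step` use the layout prompt to establish
    distinct spatial regions; from `graft_step` onward, the grafted
    target prompt names the actual food items in each region.
    (Hypothetical sketch of the schedule, not the paper's code.)
    """
    return [layout_prompt if t < graft_step else target_prompt
            for t in range(num_steps)]

# Example: graft after 10 of 50 steps, once regions are stable.
schedule = prompt_schedule(
    num_steps=50,
    graft_step=10,
    layout_prompt="a bowl on the left and a plate on the right",
    target_prompt="a bowl of soup on the left and a plate of rice on the right",
)
```

In a real diffusion sampler, `schedule[t]` would be encoded by the text encoder and passed as conditioning at denoising step `t`; keeping the two foods in separate layout regions (or merging them into one region) is what enables the entanglement control described above.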