Affective Image Manipulation (AIM) seeks to modify user-provided images to evoke specific emotional responses. The task is inherently complex because of its twofold objective: the edit must strongly evoke the intended emotion while preserving the original image composition. Existing AIM methods primarily adjust color and style, and often fail to elicit precise, profound emotional shifts. Drawing on psychological insights, we extend AIM with content modifications that enhance emotional impact. We introduce EmoEdit, a novel two-stage framework comprising emotion attribution and image editing. In the emotion attribution stage, we leverage a Vision-Language Model (VLM) to build hierarchies of semantic factors that represent abstract emotions. In the image editing stage, the VLM identifies the factors most relevant to the input image and guides a generative editing model to perform affective modifications. A ranking technique we developed then selects the best edit, balancing emotion fidelity against structural integrity. To validate EmoEdit, we assembled a dataset of 416 images spanning positive, negative, and neutral classes. Evaluated both qualitatively and quantitatively, our method outperforms existing state-of-the-art techniques. We further showcase EmoEdit's potential in various manipulation tasks, including emotion-oriented and semantics-oriented editing.
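To make the ranking step concrete, the minimal sketch below scores each candidate edit by a weighted combination of an emotion-fidelity term and a structure-integrity term, and keeps the highest-scoring candidate. The `Candidate` fields, the `rank_edits` helper, and the equal weighting `alpha` are illustrative assumptions, not the specific metrics or weighting used in EmoEdit.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One candidate edit with two hypothetical quality scores in [0, 1]."""
    image_id: str
    emotion_fidelity: float     # e.g., classifier confidence for the target emotion
    structure_integrity: float  # e.g., structural similarity to the source image

def rank_edits(candidates: list[Candidate], alpha: float = 0.5) -> Candidate:
    """Select the edit that best balances emotion fidelity and structural integrity.

    `alpha` trades off the two terms; the equal default is an assumption,
    not the weighting from the paper.
    """
    return max(
        candidates,
        key=lambda c: alpha * c.emotion_fidelity + (1 - alpha) * c.structure_integrity,
    )

# Toy usage: three hypothetical edits of the same source image.
edits = [
    Candidate("edit_a", emotion_fidelity=0.9, structure_integrity=0.4),
    Candidate("edit_b", emotion_fidelity=0.7, structure_integrity=0.8),
    Candidate("edit_c", emotion_fidelity=0.5, structure_integrity=0.9),
]
print(rank_edits(edits).image_id)  # -> "edit_b" under equal weighting
```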