Humans interpret complex visual stimuli through abstract concepts that guide decision-making tasks such as food selection and risk avoidance. Similarity judgment tasks are an effective way to probe these concepts; however, methods for controllable image generation in concept space remain underdeveloped. In this study, we present a novel framework, CoCoG-2, which integrates generated visual stimuli into similarity judgment tasks. CoCoG-2 employs a training-free guidance algorithm to increase generation flexibility. The framework is versatile: it creates experimental stimuli grounded in human concepts, supports multiple strategies for guiding visual stimulus generation, and demonstrates how these stimuli can be used to test diverse experimental hypotheses. By generating targeted visual stimuli, CoCoG-2 will advance our understanding of the causal relationship between concept representations and behavior. The code is available at \url{https://github.com/ncclab-sustech/CoCoG-2}.
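To make the "training-free guidance" idea concrete, the following is a minimal one-dimensional sketch (illustrative only, not the CoCoG-2 implementation): at each reverse step, the sample is nudged down the gradient of a guidance loss without training any auxiliary network. The concept readout `c(x) = w * x`, the target concept value, and the identity "denoiser" are all hypothetical toy choices for illustration.

```python
# Minimal sketch of training-free guidance (toy 1-D example; the real
# algorithm operates on diffusion latents with a learned concept encoder).

def guided_reverse_step(x, denoise, w, target, scale=0.1):
    """One guided update: denoise, then move against the gradient of a
    guidance loss 0.5 * (w * x0 - target)**2 on the clean estimate."""
    x0 = denoise(x)                    # base model's clean estimate
    grad = w * (w * x0 - target)       # analytic gradient of the toy loss
    return x0 - scale * grad           # guided update, no training needed

# Toy usage: identity "denoiser", concept weight w = 1.0, target 2.0.
x = 0.0
for _ in range(100):
    x = guided_reverse_step(x, lambda v: v, 1.0, 2.0, scale=0.1)
# x is driven toward the target concept value 2.0
```

The key property shown here is that guidance enters only through a loss gradient evaluated at sampling time, so any differentiable objective (e.g. a similarity-judgment criterion) can steer generation without retraining the generative model.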