Chain-of-Thought (CoT) prompting guides large language models (LLMs) to reason step by step and can elicit their logical reasoning abilities. While effective for logical tasks, CoT is ill-suited to creative problem-solving, which often requires out-of-the-box thinking and is crucial for innovation. In this paper, we explore the Leap-of-Thought (LoT) abilities of LLMs -- a non-sequential, creative paradigm involving strong associations and knowledge leaps. To this end, we study LLMs on the popular Oogiri game, which requires participants to have strong creativity and associative thinking to respond unexpectedly and humorously to a given image, text, or both, and is thus well suited to studying LoT. To investigate LLMs' LoT ability in the Oogiri game, we first build Oogiri-GO, a multimodal and multilingual dataset containing over 130,000 samples from the game, and observe that most existing LLMs exhibit insufficient LoT ability or fail outright on it. Accordingly, we introduce a Creative Leap-of-Thought (CLoT) paradigm to improve LLMs' LoT ability. CLoT first formulates the Oogiri-GO dataset into LoT-oriented instruction-tuning data to train a pretrained LLM, endowing it with basic LoT humor generation and discrimination abilities. CLoT then designs an explorative self-refinement stage that encourages the LLM to generate more creative LoT data by exploring parallels between seemingly unrelated concepts, and selects high-quality data to train itself. CLoT not only excels at humor generation in the Oogiri game but also boosts creative abilities in various tasks such as the cloud guessing game and the divergent association task. These findings advance our understanding of LLM creativity and offer a pathway to improving LLMs' creative capacities for innovative applications across domains. The dataset, code, and models will be released at https://zhongshsh.github.io/CLoT/.
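The two-stage pipeline described above (instruction tuning for generation/discrimination, then explorative self-refinement) can be sketched in pseudocode form. This is a minimal, hypothetical illustration, not the paper's implementation: `generate`, `score`, and `finetune` are placeholder stand-ins for the LLM's humor generation, its learned discrimination ability, and an instruction-tuning step, respectively.

```python
import random

def generate(model, prompt, concepts, n=4):
    """Placeholder: sample n candidate responses, each pairing the prompt
    with a randomly drawn, seemingly unrelated concept (the 'leap')."""
    return [f"{prompt} + {random.choice(concepts)} (v{i})" for i in range(n)]

def score(model, candidate):
    """Placeholder for the LLM's own discrimination ability,
    acquired in the first (instruction-tuning) stage."""
    return random.random()

def finetune(model, data):
    """Placeholder for an instruction-tuning step on the selected data."""
    return model  # a real implementation would update model weights here

def self_refine(model, prompts, concepts, rounds=2, threshold=0.5):
    """Explorative self-refinement: generate candidate LoT responses,
    keep only the high-scoring ones, and retrain on them."""
    for _ in range(rounds):
        selected = []
        for p in prompts:
            for cand in generate(model, p, concepts):
                if score(model, cand) >= threshold:  # select high-quality LoT data
                    selected.append((p, cand))
        model = finetune(model, selected)            # train on its own best outputs
    return model
```

The key design choice this sketch highlights is that the same model both produces and filters its own training data, so generation and discrimination abilities reinforce each other across refinement rounds.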