Chain-of-thought (CoT) prompting has become a widely used strategy for working with large language and multimodal models. While CoT has been shown to improve performance across many tasks, determining the settings in which it is effective remains an ongoing effort. In particular, it is still an open question in what settings CoT systematically reduces model performance. In this paper, we seek to identify the characteristics of tasks where CoT reduces performance by drawing inspiration from cognitive psychology, looking at cases where (i) verbal thinking or deliberation hurts performance in humans, and (ii) the constraints governing human performance generalize to language models. Three such cases are implicit statistical learning, visual recognition, and classifying with patterns containing exceptions. In extensive experiments across all three settings, we find that a diverse collection of state-of-the-art models exhibits significant drop-offs in performance (e.g., up to a 36.3% drop in absolute accuracy for OpenAI o1-preview compared to GPT-4o) when using inference-time reasoning relative to zero-shot counterparts. We also identify three tasks that satisfy condition (i) but not (ii), and find that while verbal thinking reduces human performance on these tasks, CoT retains or increases model performance. Overall, our results show that while there is not an exact parallel between the cognitive processes of models and those of humans, considering cases where thinking has negative consequences for human performance can help us identify settings where it negatively impacts models. By connecting the literature on human deliberation with evaluations of CoT, we offer a new tool for understanding the impact of prompt choices and inference-time reasoning.