Large Language Models (LLMs) have demonstrated strong performance across a wide range of NLP tasks. However, they often exhibit suboptimal behavior and inconsistency when exposed to unfamiliar external information, underscoring their limitations in effectively leveraging such knowledge. Inspired by constructivist learning theory, we propose ThinkNote, a novel framework that enhances the external knowledge utilization of LLMs through a two-stage constructivist cognitive modeling process. Specifically, ThinkNote first performs knowledge assimilation to align new information with the model's parametric memory, forming a coherent internal representation. It then applies thought accommodation to adapt the model's internal reasoning accordingly, thereby promoting more consistent and reliable outputs. Extensive experimental results demonstrate that ThinkNote achieves a 10% improvement over strong baseline methods on various question-answering benchmarks. Further analysis indicates that ThinkNote effectively integrates and utilizes external knowledge to help LLMs generate accurate responses and improve their self-consistency. All data and code are available at https://github.com/OpenMatch/ThinkNote.