Knowledge augmentation has significantly enhanced the performance of Large Language Models (LLMs) on knowledge-intensive tasks. However, existing methods typically operate on the simplistic premise that model performance equates to internal knowledge, overlooking the knowledge-confidence gaps that lead to overconfident errors or underconfident correct answers. To bridge this gap, we propose a novel meta-cognitive framework for reliable knowledge augmentation via differentiated intervention and alignment. Our approach leverages internal cognitive signals to partition the knowledge space into mastered, confused, and missing regions, guiding targeted knowledge expansion. Furthermore, we introduce a cognitive consistency mechanism that synchronizes subjective certainty with objective accuracy, ensuring calibrated knowledge boundaries. Extensive experiments demonstrate that our framework consistently outperforms strong baselines, not only enhancing knowledge capabilities but also fostering cognitive behaviors that better distinguish knowns from unknowns.
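The partition described above can be sketched in miniature. The code below is a hypothetical illustration, not the paper's actual method: it assumes each sample carries a binary correctness label and a scalar confidence signal, and the function name `partition_knowledge` and the threshold `conf_threshold` are invented for this sketch.

```python
def partition_knowledge(samples, conf_threshold=0.7):
    """Split samples into mastered / confused / missing regions by
    comparing the model's subjective confidence with objective accuracy.

    Each sample is a dict with a boolean ``correct`` flag and a float
    ``confidence`` in [0, 1]; both keys are assumptions of this sketch.
    """
    regions = {"mastered": [], "confused": [], "missing": []}
    for s in samples:
        correct = s["correct"]
        confident = s["confidence"] >= conf_threshold
        if correct and confident:
            # Knowledge and certainty agree: genuinely mastered.
            regions["mastered"].append(s)
        elif correct != confident:
            # Knowledge-confidence gap: overconfident error or
            # underconfident correct answer.
            regions["confused"].append(s)
        else:
            # Neither correct nor confident: knowledge is missing.
            regions["missing"].append(s)
    return regions
```

In this framing, the confused region is exactly where cognitive consistency alignment would act, while the missing region is the target of knowledge expansion.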