We develop a new perspective on knowledge editing for large language models (LLMs): decoding with constraints. We propose DeepEdit (Depth-first Search-based Progressive Decoding for Knowledge Editing), a neuro-symbolic method that improves knowledge editing with better coherence of reasoning, relevance to the question, and awareness of updated knowledge. DeepEdit can be flexibly applied to any black-box LLM: it requires no access to model parameters, representations, or output vocabulary distributions. DeepEdit progressively produces high-quality reasoning steps toward effective knowledge editing. It uses depth-first search to revise the LLM's output, improving the output's relevance to the input question and its awareness of the updated knowledge. Qualitatively, DeepEdit effectively steers LLMs to produce more succinct reasoning that respects the edited knowledge. Quantitatively, DeepEdit yields significant gains on MQuAKE, a challenging multi-hop question-answering dataset for knowledge editing. We release the source code at https://github.com/wangywUST/DeepEdit.
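To make the "decoding with constraints via depth-first search" idea concrete, here is a minimal sketch in Python. It assumes hypothetical helpers not defined in the abstract: `propose_steps` (queries a black-box LLM for candidate next reasoning steps) and `satisfies_constraints` (checks coherence, relevance, and consistency with the edited knowledge). These names are illustrative placeholders, not DeepEdit's actual API; the sketch only shows how a DFS can extend a reasoning chain step by step and backtrack when a constraint fails.

```python
from typing import Callable, List, Optional

def dfs_decode(
    question: str,
    steps: List[str],
    propose_steps: Callable[[str, List[str]], List[str]],
    satisfies_constraints: Callable[[List[str]], bool],
    is_final_answer: Callable[[str], bool],
    max_depth: int = 8,
) -> Optional[List[str]]:
    """Depth-first search over reasoning steps (illustrative sketch):
    extend the current chain with a candidate step only if the
    constraints still hold; backtrack otherwise. Returns the first
    complete constraint-satisfying chain, or None if none is found."""
    if steps and is_final_answer(steps[-1]):
        return steps  # a complete chain that ends in an answer
    if len(steps) >= max_depth:
        return None  # prune chains that grow too long
    for candidate in propose_steps(question, steps):
        new_chain = steps + [candidate]
        if not satisfies_constraints(new_chain):
            continue  # constraint violated: discard and try a sibling step
        result = dfs_decode(question, new_chain, propose_steps,
                            satisfies_constraints, is_final_answer, max_depth)
        if result is not None:
            return result
    return None  # backtrack: no candidate works at this depth
```

Because the LLM is queried only through `propose_steps`, this search treats the model as a black box, matching the abstract's claim that no access to parameters, representations, or output distributions is needed.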