Offline evaluation of LLMs is crucial for understanding their capacities, yet it remains underexplored in existing research. In this work, we focus on the offline evaluation of chain-of-thought capabilities and show how to optimize LLMs based on the proposed evaluation method. To enable offline feedback with rich knowledge and reasoning paths, we use knowledge graphs (KGs, e.g., Wikidata5m) to provide feedback on the generated chains of thought. Due to the heterogeneity between LLM reasoning and KG structures, direct interaction and feedback from KGs on LLM behavior are challenging, as they require accurate entity linking and grounding of LLM-generated chains of thought in the KG. To address this challenge, we propose an offline chain-of-thought evaluation framework, OCEAN, which models chain-of-thought reasoning in LLMs as a Markov decision process (MDP) and evaluates the policy's alignment with KG preference modeling. To overcome the reasoning heterogeneity and grounding problems, we leverage on-policy KG exploration and reinforcement learning to model a KG policy that generates token-level likelihood distributions for LLM-generated chain-of-thought reasoning paths, simulating KG reasoning preferences. We then incorporate knowledge-graph feedback on the validity and alignment of the generated reasoning paths into inverse propensity scores and propose the KG-IPS estimator. Theoretically, we prove the unbiasedness of the proposed KG-IPS estimator and provide a lower bound on its variance. With the off-policy-evaluated value function, we can directly enable off-policy optimization to further enhance chain-of-thought alignment. Our empirical study shows that OCEAN can be efficiently optimized to generate chain-of-thought reasoning paths with higher estimated values without affecting LLMs' general abilities on downstream tasks or their internal knowledge.
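To make the estimator concrete, the following is a minimal sketch of an inverse-propensity-score (IPS) value estimate in the spirit described above. It is an illustration under assumed inputs, not the paper's implementation: each logged trajectory is assumed to carry per-token probabilities under the logging (LLM) policy and the target (KG) policy, plus a scalar KG-feedback reward on the reasoning path; the function name `kg_ips_estimate` and this data layout are hypothetical.

```python
import math

def kg_ips_estimate(trajectories):
    """Hypothetical IPS-style value estimate from logged reasoning paths.

    trajectories: list of (steps, reward) pairs, where `steps` is a list of
    (p_kg, p_llm) token probabilities under the KG policy and the LLM
    (logging) policy, and `reward` is KG feedback on the path's validity.
    """
    total = 0.0
    for steps, reward in trajectories:
        # Importance weight: product over tokens of p_kg / p_llm,
        # accumulated in log space for numerical stability.
        log_w = sum(math.log(p_kg) - math.log(p_llm) for p_kg, p_llm in steps)
        total += math.exp(log_w) * reward
    # Average the reweighted rewards over the logged trajectories.
    return total / len(trajectories)
```

When the two policies agree on every token, the importance weight is 1 and the estimate reduces to the mean logged reward, which matches the intuition that reweighting only corrects for the mismatch between the logging and target policies.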