We propose an explainability technique for large language models that obtains faithful natural language explanations by grounding them in a reasoning process. Once converted to a sequence of tokens, the outputs of the reasoning process become part of the model context and can later be decoded into natural language as the model produces either the final answer or the explanation. To improve the faithfulness of the explanations, we propose a joint predict-explain approach, in which answers and explanations are inferred directly from the reasoning sequence, without the explanations depending on the answers or vice versa. We demonstrate the plausibility of the proposed technique by achieving a high degree of alignment between answers and explanations in several problem domains, observing that language models often simply copy partial decisions from the reasoning sequence into the final answers or explanations. Furthermore, we show that the proposed use of reasoning can also improve the quality of the answers.
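To make the described pipeline concrete, the following is a minimal sketch of the idea, not the authors' implementation: an external reasoning trace is serialized into the model context, and the answer and explanation are then requested jointly from that shared sequence. The serialization format, the prompt wording, and the `generate_fn` callable are illustrative assumptions.

```python
from typing import Callable, List, Tuple

# A reasoning trace: ordered partial decisions produced by an external
# reasoning process, serialized so they can enter the model context as tokens.
ReasoningTrace = List[Tuple[str, str]]  # (step description, partial decision)


def serialize_trace(trace: ReasoningTrace) -> str:
    """Turn the reasoning-process outputs into a token-ready text block."""
    lines = [
        f"Step {i + 1}: {step} -> {decision}"
        for i, (step, decision) in enumerate(trace)
    ]
    return "\n".join(lines)


def joint_predict_explain(question: str,
                          trace: ReasoningTrace,
                          generate_fn: Callable[[str], str]) -> str:
    """Joint predict-explain prompt: the answer and the explanation are both
    decoded from the reasoning sequence, neither conditioned on the other
    beyond the shared trace placed in the context."""
    prompt = (
        f"Question: {question}\n"
        f"Reasoning trace:\n{serialize_trace(trace)}\n"
        "Based only on the reasoning trace above, give the final answer "
        "and an explanation grounded in the trace.\n"
        "Answer:"
    )
    return generate_fn(prompt)


if __name__ == "__main__":
    # Dummy generator standing in for an actual LLM call.
    echo = lambda p: "(model output would appear here)"
    trace = [("compare 17 and 23", "23 is larger")]
    print(joint_predict_explain("Which is larger, 17 or 23?", trace, echo))
```

In this sketch, faithfulness is encouraged by letting both outputs read from the same in-context reasoning sequence rather than having the explanation post-hoc rationalize an already produced answer.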