Current legal outcome prediction models, a staple of legal NLP, do not explain their reasoning. However, to employ these models in the real world, human legal actors need to be able to understand their decisions. In common law, legal practitioners reason towards the outcome of a case by referring to past case law, known as precedent. We contend that precedent is therefore a natural means of facilitating explainability for legal NLP models. In this paper, we contribute a novel method for identifying the precedent employed by legal outcome prediction models. Furthermore, by developing a taxonomy of legal precedent, we are able to compare human judges and neural models with respect to the different types of precedent they rely on. We find that while the models learn to predict outcomes reasonably well, their use of precedent is unlike that of human judges.