Legal judgments may contain errors due to the complexity of case circumstances and the abstract nature of legal concepts, while existing appellate review mechanisms face efficiency pressures from surging case volumes. Although current legal AI research focuses on tasks such as judgment prediction and legal document generation, judgment review differs fundamentally in both objective and paradigm: it centers on detecting, classifying, and correcting errors after a judgment has been issued, making it an anomaly-detection problem rather than one of prediction or generation. To address this gap, we introduce a novel task, APPELLATE REVIEW, which assesses models' diagnostic reasoning and reliability in legal practice. We also construct a new benchmark, AR-BENCH, comprising 8,700 finely annotated decisions and 34,617 supplementary corpus documents. Evaluating 14 large language models on AR-BENCH reveals critical limitations in their ability to identify errors in the application of law, providing empirical evidence to guide future improvements.