We introduce the Korean Canonical Legal Benchmark (KCL), a benchmark designed to assess language models' legal reasoning capabilities independently of domain-specific knowledge. KCL provides question-level supporting precedents, enabling a more faithful disentanglement of reasoning ability from parameterized knowledge. KCL consists of two components: (1) KCL-MCQA, a multiple-choice set of 283 questions with 1,103 aligned precedents, and (2) KCL-Essay, an open-ended generation set of 169 questions with 550 aligned precedents and 2,739 instance-level rubrics for automated evaluation. Our systematic evaluation of more than 30 models reveals substantial remaining performance gaps, particularly on KCL-Essay, and shows that reasoning-specialized models consistently outperform their general-purpose counterparts. We release all resources, including the benchmark dataset and evaluation code, at https://github.com/lbox-kr/kcl.