Large Language Models (LLMs) can struggle to fully understand legal theories and perform complex legal reasoning tasks. In this study, we introduce a challenging task, confusing charge prediction, to better evaluate LLMs' understanding of legal theories and their reasoning capabilities. We also propose a novel framework: a Multi-Agent framework for improving complex Legal Reasoning capability (MALR). MALR employs non-parametric learning, encouraging LLMs to automatically decompose complex legal tasks and mimic the human learning process to extract insights from legal rules, thereby helping LLMs better understand legal theories and enhancing their legal reasoning abilities. Extensive experiments on multiple real-world datasets demonstrate that the proposed framework effectively addresses complex reasoning issues in practical scenarios, paving the way for more reliable applications in the legal domain.