The gap between static benchmarks and the dynamic nature of real-world legal practice poses a key barrier to advancing legal intelligence. To address this, we introduce J1-ENVS, the first interactive and dynamic legal environment tailored for LLM-based agents. Guided by legal experts, it comprises six representative scenarios from Chinese legal practice spanning three levels of environmental complexity. We further introduce J1-EVAL, a fine-grained evaluation framework designed to assess both task performance and procedural compliance across varying levels of legal proficiency. Extensive experiments on 17 LLM agents reveal that, while many models demonstrate solid legal knowledge, they struggle with procedural execution in dynamic settings. Even the SOTA model, GPT-4o, scores below 60% in overall performance. These findings highlight persistent challenges in achieving dynamic legal intelligence and offer valuable insights to guide future research.