Large language models (LLMs) are increasingly being integrated into legal applications, including judicial decision support, legal practice assistance, and public-facing legal services. While LLMs show strong potential in handling legal knowledge and tasks, their deployment in real-world legal settings raises critical concerns beyond surface-level accuracy, involving the soundness of legal reasoning processes and trustworthiness issues such as fairness and reliability. Systematic evaluation of LLM performance on legal tasks has therefore become essential for their responsible adoption. This survey identifies key challenges in evaluating LLMs on legal tasks grounded in real-world legal practice. We analyze the major difficulties involved in assessing LLM performance in the legal domain, including outcome correctness, reasoning reliability, and trustworthiness. Building on these challenges, we review and categorize existing evaluation methods and benchmarks according to their task design, datasets, and evaluation metrics. We further discuss the extent to which current approaches address these challenges, highlight their limitations, and outline future research directions toward more realistic, reliable, and legally grounded evaluation frameworks for LLMs in the legal domain.