Fact verification is a critical yet underexplored component of non-litigation legal practice. While existing research has examined automation in legal workflows and human-AI collaboration in high-stakes domains, little is known about how GenAI can support fact verification, a task that demands prudent judgment and strict accountability. To address this gap, we conducted semi-structured interviews with 18 lawyers to understand their current verification practices, attitudes toward GenAI adoption, and expectations for future systems. We found that while lawyers use GenAI for low-risk tasks such as drafting and language optimization, concerns over accuracy, confidentiality, and liability currently limit its adoption for fact verification. These concerns translate into core design requirements for AI systems that are trustworthy and accountable. Based on these requirements, we contribute design insights for human-AI collaboration in legal fact verification, emphasizing the development of auditable systems that balance efficiency with professional judgment and uphold ethical and legal accountability in high-stakes practice.