Generative AI tools are increasingly used for legal tasks, including legal research, document drafting, and even legal decision-making. As in other domains, the use of GenAI in the legal sector entails various risks and benefits that need to be properly managed to ensure implementation in a way that serves public values and protects human rights. While the EU mandates risk assessments and audits before market introduction for some use cases (e.g., use by judges for the administration of justice), other use cases do not fall under the AI Act's high-risk classifications (e.g., use by citizens for legal consultation or document drafting). Further, current risk management practices prioritize expert judgment in identifying and prioritizing risk factors, without a corresponding legal requirement to consult affected communities. Given the societal importance of the legal sector and the potentially transformative impact of GenAI within it, the acceptability and legitimacy of GenAI solutions also depend on public perceptions and on a better understanding of the risks and benefits citizens associate with the use of AI in the legal sector. In response, this paper presents data from a representative sample of German citizens (n=488) outlining citizens' perspectives on the use of GenAI for two legal tasks: legal consultation and legal mediation. Concretely, we i) systematically map risk and benefit factors for both legal tasks, ii) describe predictors that influence the acceptance of risks of using GenAI for those tasks, and iii) highlight emerging trade-off themes that citizens engage in when weighing risk acceptability. Our results provide an empirical overview of citizens' concerns regarding the risk management of GenAI in the legal domain, foregrounding critical themes that complement current risk assessment procedures.