Notwithstanding their unprecedented ability to generate text, LLMs do not understand the meaning of words, have no sense of context, and cannot reason. Their output constitutes an approximation of statistically dominant word patterns. And yet, the drafting of contracts is often presented as a typical legal task that could be facilitated by this technology. This paper seeks to put an end to such unreasonable ideas. Predicting words differs from using language in the circumstances of specific transactions, and reconstituting common contractual phrases differs from reasoning about the law. LLMs appear able to generate generic and superficially plausible contractual documents. In the cold light of day, however, such documents may turn out to be useless assemblages of inconsistent provisions, or contracts that are enforceable but unsuitable for a given transaction. This paper casts doubt on the simplistic assumption that LLMs threaten the continued viability of the legal industry.