Long regulatory texts are challenging to summarize due to their length and complexity. To address this, we propose a two-step extractive-abstractive architecture for handling lengthy regulatory documents more effectively. In this paper, we show that the effectiveness of this two-step architecture varies significantly depending on the model used. Specifically, the extractive step improves the performance of decoder-only models. For abstractive encoder-decoder models with short context lengths, its effectiveness varies, whereas for long-context encoder-decoder models, the extractive step degrades performance. This research also highlights the challenges of evaluating generated text, as evidenced by the diverging results of human and automated evaluations. Most notably, human evaluators favoured language models pretrained on legal text, while automated metrics ranked general-purpose language models higher. These results underscore the importance of selecting a summarization strategy appropriate to the model architecture and context length.