Due to their length and complexity, long regulatory texts are challenging to summarize. To address this, we propose a two-step extractive-abstractive architecture for handling lengthy regulatory documents more effectively. In this paper, we show that the effectiveness of this two-step architecture varies significantly depending on the model used. Specifically, the two-step architecture improves the performance of decoder-only models. For abstractive encoder-decoder models with short context lengths, the effectiveness of an extractive step varies, whereas for long-context encoder-decoder models, the extractive step worsens performance. This research also highlights the challenges of evaluating generated texts, as evidenced by the differing results of human and automated evaluations. Most notably, human evaluations favoured language models pretrained on legal text, while automated metrics ranked general-purpose language models higher. These results underscore the importance of selecting a summarization strategy appropriate to the model architecture and context length.