While decoder-only Large Language Models (LLMs) have recently dominated the NLP landscape, encoder-only architectures remain a cost-effective and parameter-efficient standard for discriminative tasks. However, classic encoders such as BERT are limited by a short context window, which is insufficient for processing long documents. In this paper, we address this limitation for the Polish language by introducing a high-quality Polish model capable of processing sequences of up to 8192 tokens. The model was developed with a two-stage training procedure that combines positional embedding adaptation and full-parameter continued pre-training. Furthermore, we propose compressed model variants trained via knowledge distillation. The models were evaluated on 25 tasks, including the KLEJ benchmark, a newly introduced financial task suite (FinBench), and other classification and regression tasks, particularly those requiring long-document understanding. The results demonstrate that our model achieves the best average performance among Polish and multilingual models, significantly outperforming competing solutions on long-context tasks while maintaining comparable quality on short texts.
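The abstract does not spell out how the positional embedding adaptation stage works. As a minimal sketch only, assuming a BERT-style table of learned absolute position embeddings (the paper's actual method may differ), one common way to stretch 512 learned positions to 8192 is linear interpolation along the position axis; the function name and tensor shapes below are illustrative:

```python
import torch

def extend_position_embeddings(old_emb: torch.Tensor, new_len: int) -> torch.Tensor:
    """Resample a learned absolute position embedding table to a longer length.

    Hypothetical helper: treats each embedding dimension as a 1-D signal over
    positions and linearly interpolates it from old_len to new_len.
    """
    old_len, dim = old_emb.shape
    resampled = torch.nn.functional.interpolate(
        old_emb.T.unsqueeze(0),   # (1, dim, old_len): channels = embedding dims
        size=new_len,
        mode="linear",
        align_corners=True,
    )
    return resampled.squeeze(0).T  # (new_len, dim)

# Example: adapt a 512-position, 768-dimensional table to 8192 positions,
# after which full-parameter continued pre-training would refine the weights.
old = torch.randn(512, 768)
new = extend_position_embeddings(old, 8192)
assert new.shape == (8192, 768)
```

Interpolation preserves the smooth structure the model learned over the original positions, giving continued pre-training a reasonable starting point rather than randomly initialized long-range positions.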