This paper presents a method for Estonian text simplification based on two neural architectures: a neural machine translation (NMT) model and a fine-tuned large language model (LLaMA). Given the scarcity of existing Estonian resources, a new parallel dataset was created by combining manually translated corpora with GPT-4-generated simplifications. OpenNMT was selected as a representative NMT system, while LLaMA was fine-tuned on the constructed dataset. Evaluation shows that the fine-tuned LLaMA outperforms OpenNMT in grammaticality, readability, and meaning preservation. These results underscore the effectiveness of large language models for text simplification in low-resource language settings. The complete dataset, fine-tuning scripts, and evaluation pipeline are provided as a publicly accessible supplementary package to support reproducibility and adaptation to other languages.
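To make the fine-tuning step concrete, the sketch below shows one plausible way to adapt a LLaMA checkpoint to complex-to-simple Estonian sentence pairs using LoRA via Hugging Face transformers and peft. This is a minimal illustration under stated assumptions, not the paper's exact setup: the checkpoint name, JSONL data schema, prompt template, file name, and hyperparameters are all hypothetical.

```python
# Hypothetical sketch: LoRA fine-tuning of a LLaMA checkpoint on complex->simple
# sentence pairs. Checkpoint, data schema, prompt template, and hyperparameters
# are illustrative assumptions, not the configuration reported in the paper.
import json

import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA has no pad token by default


def load_pairs(path):
    # Assumed schema: one JSON object per line, {"complex": ..., "simple": ...}.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


def to_prompt(pair):
    # Instruction-style prompt ("Simplify the following Estonian sentence.");
    # the exact template is an assumption.
    return (
        "Lihtsusta järgmine eestikeelne lause.\n"
        f"Lause: {pair['complex']}\n"
        f"Lihtsustus: {pair['simple']}{tokenizer.eos_token}"
    )


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)


pairs = load_pairs("et_simplification_train.jsonl")  # hypothetical file name
ds = Dataset.from_dict({"text": [to_prompt(p) for p in pairs]}).map(
    tokenize, batched=True, remove_columns=["text"]
)

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-et-simplify",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=ds,
    # Causal LM collator (mlm=False) builds labels from the input ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

LoRA is shown here because it keeps the memory footprint of fine-tuning a 7B-parameter model manageable on a single GPU; full fine-tuning would follow the same data flow with `get_peft_model` removed.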