Readability-controlled text simplification (RCTS) rewrites texts to lower readability levels while preserving their meaning. RCTS models often depend on parallel corpora with readability annotations on both the source and target sides; such datasets are scarce and difficult to curate, especially at the sentence level. To reduce reliance on parallel data, we explore using instruction-tuned large language models for zero-shot RCTS. Through automatic and manual evaluations, we examine (1) how different types of contextual information affect a model's ability to generate sentences at the desired readability level, and (2) the trade-off between achieving target readability and preserving meaning. Results show that all tested models struggle to simplify sentences, especially to the lowest levels, due both to model limitations and to characteristics of the source sentences that impede adequate rewriting. Our experiments also highlight the need for automatic evaluation metrics tailored to RCTS, as standard metrics often misinterpret common simplification operations and assess readability and meaning preservation inaccurately.
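To make the zero-shot setup above concrete, the snippet below is a minimal sketch of how an instruction-tuned LLM might be prompted for readability-controlled simplification. It is not the paper's actual prompt: the CEFR level scale, the build_rcts_prompt helper, and the call_llm stub are all assumptions introduced for illustration.

```python
from typing import Optional

# Assumed readability scale; the paper may use a different one.
CEFR_LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]


def build_rcts_prompt(sentence: str, target_level: str,
                      context: Optional[str] = None) -> str:
    """Compose a zero-shot RCTS instruction. The optional `context`
    argument mirrors the question of how extra contextual information
    affects the model's rewrite."""
    assert target_level in CEFR_LEVELS, "unknown readability level"
    parts = [
        f"Rewrite the sentence below so that it is readable at CEFR "
        f"level {target_level}, while preserving its meaning.",
    ]
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Sentence: {sentence}")
    parts.append("Rewritten sentence:")
    return "\n".join(parts)


def call_llm(prompt: str) -> str:
    # Placeholder for any instruction-tuned LLM backend
    # (e.g., a local transformers pipeline or a hosted API).
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_rcts_prompt(
        "The committee deliberated at length before ratifying the proposal.",
        target_level="A2",
    )
    print(prompt)
```

In this sketch the target readability level is expressed purely through the instruction text, which is what makes the setup zero-shot: no parallel readability-annotated examples are supplied to the model.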