Large language models demonstrate limited capability in proficiency-controlled sentence simplification, particularly when simplifying across wide readability gaps. We propose a framework that decomposes complex simplifications into manageable steps through dynamic path planning, semantic-aware exemplar selection, and chain-of-thought generation with conversation history for coherent reasoning. Evaluation on five languages across two benchmarks shows that our approach improves simplification effectiveness while reducing computational steps by 22-42%. Human evaluation confirms a fundamental trade-off between simplification effectiveness and meaning preservation. Notably, even human annotators struggle to agree on semantic-preservation judgments, highlighting the inherent complexity of the task. Our work shows that while step-by-step simplification improves control, preserving semantic fidelity during extensive simplification remains an open challenge.
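To make the decomposition concrete, the loop below is a minimal sketch of the step-by-step pipeline the abstract describes. All function names (`plan_path`, `select_exemplars`, `simplify`), the CEFR level scale, the word-overlap similarity, and the `llm_step` callback are illustrative assumptions, not the authors' actual implementation; in practice the exemplar selector would use semantic embeddings and `llm_step` would call an LLM with the accumulated conversation history.

```python
def plan_path(source_level, target_level, max_jump=1):
    """Dynamic path planning (assumed): split a wide readability gap
    into intermediate steps of at most `max_jump` levels each."""
    levels = ["A1", "A2", "B1", "B2", "C1", "C2"]  # CEFR scale (assumption)
    src, tgt = levels.index(source_level), levels.index(target_level)
    step = -max_jump if tgt < src else max_jump
    path, cur = [], src
    while cur != tgt:
        cur = max(tgt, cur + step) if step < 0 else min(tgt, cur + step)
        path.append(levels[cur])
    return path

def select_exemplars(sentence, pool, k=2):
    """Semantic-aware exemplar selection, stubbed here with a toy
    word-overlap (Jaccard) similarity instead of embeddings."""
    def sim(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, len(wa | wb))
    return sorted(pool, key=lambda ex: sim(sentence, ex), reverse=True)[:k]

def simplify(sentence, source_level, target_level, exemplar_pool, llm_step):
    """Chain each planned level, passing the conversation history so the
    model can reason coherently across steps. `llm_step` is a stand-in
    for the LLM call: (sentence, level, exemplars, history) -> sentence."""
    history, current = [], sentence
    for level in plan_path(source_level, target_level):
        exemplars = select_exemplars(current, exemplar_pool)
        current = llm_step(current, level, exemplars, history)
        history.append((level, current))
    return current, history
```

With `max_jump=1`, a C2-to-A1 request expands to five single-level steps (C1, B2, B1, A2, A1); planning larger jumps where the gap allows is one way to realize the 22-42% reduction in computational steps reported above.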