Unnatural text correction aims to automatically detect and correct spelling errors or adversarial perturbation errors in sentences. Existing methods typically rely on fine-tuning or adversarial training to correct errors and have achieved notable success. However, these methods generalize poorly because of the distribution gap between training data and real-world inputs, known as the exposure bias problem. In this paper, we propose a self-correct adversarial training framework for \textbf{L}earn\textbf{I}ng from \textbf{MI}s\textbf{T}akes (\textbf{LIMIT}), a task- and model-agnostic framework for correcting unnatural errors or mistakes. Specifically, we exploit the errors that the model itself exposes during the inference phase, i.e., predictions that are inconsistent with the target. This training strategy not only simulates the errors likely to arise in real application scenarios but also mitigates the exposure bias of conventional training. Meanwhile, we design a novel decoding intervention strategy to maintain semantic consistency. Extensive experiments on Chinese unnatural text error correction datasets show that our method corrects multiple forms of errors and outperforms state-of-the-art text correction methods. Furthermore, results on Chinese and English datasets validate that LIMIT can serve as a plug-and-play defense module and be extended to new models and datasets without further training.