The advent of Large Language Models (LLMs) has significantly reshaped the landscape of machine translation (MT), particularly for low-resource languages and domains that lack sufficient parallel corpora, linguistic tools, and computational infrastructure. This survey presents a comprehensive overview of recent progress in leveraging LLMs for MT. We analyze techniques such as few-shot prompting, cross-lingual transfer, and parameter-efficient fine-tuning that enable effective adaptation to under-resourced settings. We also explore synthetic data generation strategies using LLMs, including back-translation and lexical augmentation. Additionally, we compare LLM-based translation with traditional encoder-decoder models across diverse language pairs, highlighting the strengths and limitations of each. We discuss persistent challenges such as hallucinations, evaluation inconsistencies, and inherited biases, while also evaluating emerging LLM-driven metrics for translation quality. This survey offers practical insights and outlines future directions for building robust, inclusive, and scalable MT systems in the era of large-scale generative models.