We present TranslateGemma, a suite of open machine translation models based on the Gemma 3 foundation models. To enhance the inherent multilingual capabilities of Gemma 3 for translation, we employ a two-stage fine-tuning process. First, supervised fine-tuning is performed on a rich mixture of large-scale, high-quality synthetic parallel data generated by state-of-the-art models and human-translated parallel data. This is followed by a reinforcement learning phase, in which translation quality is optimized using an ensemble of reward models, including MetricX-QE and AutoMQM. We demonstrate the effectiveness of TranslateGemma with human evaluation on the WMT25 test set across 10 language pairs and with automatic evaluation on the WMT24++ benchmark across 55 language pairs. Automatic metrics show consistent and substantial gains over the baseline Gemma 3 models at all model sizes. Notably, smaller TranslateGemma models often match the performance of larger baseline models, offering improved efficiency. TranslateGemma models also retain strong multimodal capabilities, with improved performance on the Vistra image translation benchmark. By releasing the open TranslateGemma models, we aim to provide the research community with powerful and adaptable tools for machine translation.