End-to-end speech-to-speech translation (S2ST) systems typically face a critical data bottleneck: the scarcity of parallel speech-to-speech corpora. To overcome this, we introduce RosettaSpeech, a novel zero-shot framework trained exclusively on monolingual speech-text data augmented by machine translation supervision. Unlike prior work that relies on complex cascaded pseudo-labeling, our approach strategically uses text as a semantic bridge during training to synthesize translation targets, eliminating the need for parallel speech pairs while preserving a direct, end-to-end inference pipeline. Empirical evaluations on the CVSS-C benchmark show that RosettaSpeech achieves state-of-the-art zero-shot performance, surpassing leading baselines by significant margins: it attains ASR-BLEU scores of 25.17 for German-to-English (a 27% relative gain) and 29.86 for Spanish-to-English (a 14% relative gain). Crucially, our model effectively preserves the source speaker's voice without ever seeing paired speech data. We further analyze the impact of data scaling and demonstrate the model's capability in many-to-one translation, offering a scalable path for extending high-quality S2ST to "text-rich, speech-poor" languages.