Recent work reports gains in neural text-to-speech (TTS) with Group Relative Policy Optimization (GRPO). However, in the absence of a verifiable reward for \textit{prosody}, GRPO trained on transcription-oriented signals (CER/NLL) lowers error rates yet collapses prosody into monotone, unnatural speech; adding a speaker-similarity reward further destabilizes training and degrades CER. We address this with an \textit{iterative Direct Preference Optimization (DPO)} scheme that uses only a few hundred human-labeled preference pairs per round to directly optimize prosodic naturalness while regularizing toward the current model. On \textbf{KoCC-TTS}, a curated dataset of authentic Korean call center interactions capturing task-oriented dialogues, our method attains the highest human preference (ELO) with competitive CER, outperforming GRPO and strong commercial baselines. These results suggest that when prosody cannot be rewarded automatically, \textit{human preference optimization} offers a practical and data-efficient path to natural and robust TTS. The demo page is available at \href{https://tts.ch.dev}{https://tts.ch.dev}.
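For concreteness, a minimal sketch of the per-round objective, assuming the standard DPO formulation with the reference policy refreshed from the current model at each iteration (the exact conditioning on text and speaker, and the choice of $\beta$, are not specified in this abstract):
\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\,\pi_{\mathrm{ref}})
\;=\;
-\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\left[
\log \sigma\!\left(
\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
\;-\;
\beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
\right)
\right],
\]
where $x$ is the input text, $y_w$ and $y_l$ are the human-preferred and dispreferred speech samples in a labeled pair, $\sigma$ is the logistic function, and $\beta$ controls the strength of regularization toward $\pi_{\mathrm{ref}}$; in the iterative scheme, $\pi_{\mathrm{ref}}$ is reset to the current model at the start of each round.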