Recent advances in Large Audio Language Models (LALMs) have extended Text-to-Speech (TTS) to interactive role-play scenarios, which demand high expressiveness and strict adherence to role-play instructions. However, existing models struggle to maintain stylistic consistency with character profiles and scene descriptions across multi-turn dialogues. A critical bottleneck is the lack of objective metrics for quantifying speaking style. To bridge this gap, we propose Mean Continuation Log-Probability (MCLP) as both an evaluation metric and a reward signal, validated on LALM-based Role-Play TTS (RP-TTS) tasks. Critically, we leverage the in-context learning capability of pre-trained LALMs to formulate MCLP as a continuation log-probability prediction. This metric quantifies stylistic consistency by measuring the likelihood of the ground-truth speech conditioned on the generated speech. Furthermore, we employ MCLP as a reinforcement learning reward to enhance the style alignment between generated speech and role-play instructions. To facilitate evaluation, we construct an RP-TTS dataset with rich scene and character annotations. Experimental results demonstrate that our method significantly outperforms strong LALM baselines on both objective and subjective metrics.
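As a minimal sketch of how MCLP could be formalized (the precise conditioning and tokenization are assumptions, not stated above): let $\hat{x}$ denote the generated speech tokens, $y = (y_1, \dots, y_T)$ the ground-truth speech tokens, and $p_\theta$ the frozen pre-trained LALM used as the scorer, with $\hat{x}$ supplied as an in-context prefix:
\[
\mathrm{MCLP}(\hat{x}, y) \;=\; \frac{1}{T} \sum_{t=1}^{T} \log p_{\theta}\!\left(y_t \mid \hat{x},\, y_{<t}\right).
\]
Under this reading, a higher MCLP means the ground-truth continuation is more probable after the model has observed the generated speech, i.e., the two plausibly share a consistent speaking style.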