Real-time voice conversion and speaker anonymization require causal, low-latency synthesis without sacrificing intelligibility or naturalness. Current systems suffer from a core representational mismatch: content is time-varying, while speaker identity is injected as a static global embedding. We introduce a streamable speech synthesizer that aligns the temporal granularity of identity and content via a content-synchronous, time-varying timbre (TVT) representation. A Global Timbre Memory expands a global timbre instance into multiple compact facets; frame-level content attends to this memory, a gate regulates variation, and spherical interpolation preserves identity geometry while enabling smooth local changes. In addition, a factorized vector-quantized bottleneck regularizes content to reduce residual speaker leakage. The resulting system is streamable end-to-end, with <80 ms GPU latency. Experiments show improvements in naturalness, speaker transfer, and anonymization over state-of-the-art streaming baselines, establishing TVT as a scalable approach to privacy-preserving and expressive speech synthesis under strict latency budgets.
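The TVT mechanism described above can be illustrated with a minimal sketch: a global timbre vector is expanded into facets, a content frame attends over them, a scalar gate bounds the departure from the global identity, and spherical interpolation (slerp) keeps the result on the unit sphere. All dimensions, the gating signal, and the variation cap below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between unit vectors a and b by factor t."""
    dot = np.clip(np.dot(a, b), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < 1e-6:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(0)
d, K = 64, 8  # embedding dim and facet count (illustrative sizes)

# Global timbre instance and its expansion into K facets
# (random stand-ins for the learned Global Timbre Memory).
g = rng.normal(size=d)
g /= np.linalg.norm(g)
facets = rng.normal(size=(K, d))
facets /= np.linalg.norm(facets, axis=1, keepdims=True)

# One frame of content attends over the facet memory.
c = rng.normal(size=d)
scores = facets @ c / np.sqrt(d)
attn = np.exp(scores - scores.max())
attn /= attn.sum()
t_local = attn @ facets
t_local /= np.linalg.norm(t_local)

# A scalar gate regulates how far this frame's timbre departs from g
# (the sigmoid input here is a hypothetical gating signal).
gate = 1.0 / (1.0 + np.exp(-(c @ g)))
t_frame = slerp(g, t_local, 0.2 * gate)  # small cap keeps variation local

print(np.linalg.norm(t_frame))  # slerp keeps the identity on the unit sphere
```

Because slerp moves along a great circle of the unit sphere, the frame-level timbre retains unit norm, which is one way to read the claim that the interpolation "preserves identity geometry" while allowing smooth local variation.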