This paper introduces a novel combination of two tasks previously treated separately: acoustic-to-articulatory speech inversion (AAI) and phoneme-to-articulatory (PTA) motion estimation. We refer to this joint task as acoustic phoneme-to-articulatory speech inversion (APTAI) and explore two different approaches, both working speaker- and text-independently during inference. We use a multi-task learning setup, with the end-to-end goal of taking raw speech as input and estimating the corresponding articulatory movements, phoneme sequence, and phoneme alignment. While both proposed approaches share these same requirements, they differ in how they achieve phoneme-related predictions: one is based on frame classification, the other on a two-staged training procedure and forced alignment. We reach competitive performance of 0.73 mean correlation for the AAI task and achieve up to approximately 87% frame overlap compared to a state-of-the-art text-dependent phoneme forced aligner.