Voiced Electromyography (EMG)-to-Speech (V-ETS) models reconstruct speech from muscle activity signals, enabling applications such as neurolaryngologic diagnostics. Despite this potential, progress in V-ETS is hindered by the scarcity of paired EMG-speech data. To address this, we propose a novel Confidence-based Multi-Speaker Self-training (CoM2S) approach, along with a newly curated Libri-EMG dataset. CoM2S leverages synthetic EMG data generated by a pre-trained model, applies a phoneme-level confidence filter to discard unreliable samples, and uses the retained data to improve the ETS model through self-training. Experiments demonstrate that our method improves phoneme accuracy, reduces phonological confusion, and lowers word error rate, confirming the effectiveness of CoM2S for V-ETS. To support future research, we will release our code and the proposed Libri-EMG dataset: an open-access, time-aligned collection of multi-speaker voiced EMG and speech recordings.
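The filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`Sample`, `PHONEME_CONF_THRESHOLD`) and the specific acceptance rule (requiring every phoneme prediction to clear a fixed threshold) are assumptions for the sake of example.

```python
# Hedged sketch of confidence-based filtering for self-training:
# keep only synthetic EMG utterances whose phoneme-level confidences
# all clear a threshold before adding them to the training pool.
# All names and the threshold value are illustrative, not from the paper.
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    utterance_id: str
    phoneme_confidences: List[float]  # one confidence per predicted phoneme

PHONEME_CONF_THRESHOLD = 0.8  # assumed value for illustration

def passes_filter(sample: Sample) -> bool:
    """Accept a synthetic sample only if every phoneme prediction is confident."""
    return all(c >= PHONEME_CONF_THRESHOLD for c in sample.phoneme_confidences)

def filter_synthetic(samples: List[Sample]) -> List[Sample]:
    """Return the subset of synthetic samples reliable enough for self-training."""
    return [s for s in samples if passes_filter(s)]
```

In a self-training loop of this shape, the retained samples would be mixed with the real paired EMG-speech data to retrain the ETS model; rejected samples are simply dropped rather than corrected.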