We present a neuromuscular speech interface that translates electromyographic (EMG) signals, collected from orofacial muscles during speech articulation, directly into audio. We show that self-supervised speech (SS) representations exhibit a strong linear relationship with the electrical power of muscle action potentials: SS features can be linearly mapped to EMG power with a correlation of $r = 0.85$. Moreover, EMG power vectors corresponding to different articulatory gestures form structured and separable clusters in feature space. This relationship, $\text{SS features}$ $\xrightarrow{\texttt{linear mapping}}$ $\text{EMG power}$ $\xrightarrow{\texttt{gesture-specific clustering}}$ $\text{articulatory movements}$, suggests that SS models implicitly encode articulatory mechanisms. Leveraging this property, we map EMG signals directly into SS feature space and synthesize speech from them, enabling end-to-end EMG-to-speech generation without explicit articulatory models or separate vocoder training.
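As a minimal illustration of the two measurements above, the sketch below fits a ridge linear probe from SS features to EMG power and reports held-out Pearson correlation, then checks gesture separability of EMG power vectors with k-means. All array shapes, the synthetic stand-in data, and the clustering check are assumptions for illustration only, not the paper's actual pipeline.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# --- Probe 1: linear map from SS features to EMG power -------------------
# Hypothetical shapes: T time-aligned frames, D-dim SS features,
# C electrode channels of EMG power. Real aligned data would replace
# these synthetic arrays.
T, D, C = 5000, 768, 8
W_true = rng.standard_normal((D, C)) / np.sqrt(D)
ss_feats = rng.standard_normal((T, D))
emg_power = ss_feats @ W_true + 0.5 * rng.standard_normal((T, C))

X_tr, X_te, y_tr, y_te = train_test_split(
    ss_feats, emg_power, test_size=0.2, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)   # linear probe, multi-output
pred = probe.predict(X_te)
r = np.mean([pearsonr(pred[:, c], y_te[:, c])[0] for c in range(C)])
print(f"held-out mean Pearson r = {r:.2f}")

# --- Probe 2: gesture-specific clustering of EMG power vectors -----------
# Hypothetical setup: G articulatory gestures, each contributing a
# cluster of EMG power vectors around its own center.
G, n_per = 5, 200
centers = 3.0 * rng.standard_normal((G, C))
vecs = np.vstack([c + rng.standard_normal((n_per, C)) for c in centers])
labels = np.repeat(np.arange(G), n_per)
pred_labels = KMeans(n_clusters=G, n_init=10, random_state=0).fit_predict(vecs)
print(f"adjusted Rand index vs. gesture labels = "
      f"{adjusted_rand_score(labels, pred_labels):.2f}")
```

A high held-out $r$ from the first probe supports the claimed linearity between SS features and EMG power; a high adjusted Rand index from the second indicates that EMG power vectors separate cleanly by gesture.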