Target Speaker Extraction (TSE) aims to extract the clean speech of a target speaker from an audio mixture, eliminating irrelevant background noise and interfering speech. While prior work has explored various auxiliary cues, including pre-recorded speech, visual information (e.g., lip motions and gestures), and spatial information, acquiring and selecting such strong cues is infeasible in many practical scenarios. Unlike existing work, in this paper we condition the TSE algorithm on semantic cues extracted from limited and unaligned text content, such as condensed points from a presentation slide. This method is particularly useful in scenarios such as meetings, poster sessions, or lectures, where acquiring other cues in real time is challenging. To this end, we design two different networks. Specifically, our proposed TPE fuses audio features with content-based semantic cues to facilitate time-frequency mask generation and filter out extraneous noise, while our second proposal, TSR, employs contrastive learning to associate blindly separated speech signals with the semantic cues. Experimental results demonstrate that semantic cues derived from limited and unaligned text can accurately identify the target speaker, yielding an SI-SDRi of 12.16 dB, an SDRi of 12.66 dB, a PESQi of 0.830, and a STOIi of 0.150. The dataset and source code will be made publicly available. Project demo page: https://slideTSE.github.io/.
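At inference time, the TSR-style association described above reduces to picking, among blindly separated streams, the one whose embedding is most similar to the text-cue embedding. The following is a minimal sketch under stated assumptions: the embeddings are hypothetical pre-computed vectors (in the real system they would come from trained speech and text encoders), and cosine similarity stands in for the learned contrastive score.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_target(stream_embs, cue_emb):
    """Return the index of the separated stream whose embedding
    best matches the semantic cue embedding, plus all scores."""
    sims = [cosine(e, cue_emb) for e in stream_embs]
    return int(np.argmax(sims)), sims

# Toy embeddings (hypothetical; real ones come from trained encoders).
cue = np.array([1.0, 0.0, 0.5])            # text-cue embedding
streams = [np.array([0.1, 1.0, 0.0]),      # interfering speaker
           np.array([0.9, 0.1, 0.6])]      # target speaker
idx, sims = select_target(streams, cue)    # idx selects the matching stream
```

In the paper's setting, contrastive training pulls the target stream's embedding toward the cue embedding, so this nearest-embedding selection is the natural decision rule at test time.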