Spoken term detection (STD) is often hindered by reliance on frame-level features and computationally intensive DTW-based template matching, limiting its practicality. To address these challenges, we propose a novel approach that encodes speech into discrete, speaker-agnostic semantic tokens. This facilitates fast retrieval using text-based search algorithms and effectively handles out-of-vocabulary terms. Our approach focuses on generating consistent token sequences across varying utterances of the same term. We also propose bidirectional state space modeling within the Mamba encoder, trained in a self-supervised learning framework, to learn contextual frame-level features that are further encoded into discrete tokens. Our analysis shows that our speech tokens exhibit greater speaker invariance than those from existing tokenizers, making them more suitable for STD tasks. Empirical evaluation on the LibriSpeech and TIMIT databases indicates that our method outperforms existing STD baselines while being more efficient.
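To make the retrieval claim concrete, here is a minimal sketch (not the paper's implementation) of why discrete tokens enable fast, vocabulary-free search: once each utterance is a sequence of token IDs, detecting a spoken term reduces to contiguous-subsequence matching, so standard text-search algorithms apply and out-of-vocabulary terms pose no problem. All token IDs and utterance names below are made-up examples.

```python
# Sketch: spoken term detection over discrete token sequences.
# Assumes a tokenizer has already mapped speech to integer token IDs;
# the specific IDs here are hypothetical.

def find_occurrences(query, utterance):
    """Return start indices where the query token sequence occurs
    as a contiguous subsequence of the utterance token sequence."""
    n, m = len(utterance), len(query)
    return [i for i in range(n - m + 1) if utterance[i:i + m] == query]

# Hypothetical tokens for a query term and two indexed utterances.
query = [17, 4, 92]
utterances = {
    "utt_1": [5, 17, 4, 92, 33, 8],   # contains the term at index 1
    "utt_2": [12, 7, 17, 4, 91, 92],  # no contiguous match
}

hits = {uid: find_occurrences(query, toks) for uid, toks in utterances.items()}
```

In practice such linear scans can be replaced by classical string-indexing structures (e.g., suffix arrays or inverted n-gram indexes over token IDs), which is what makes the approach faster than frame-level DTW template matching.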