Transformer-based models have recently reached state-of-the-art accuracy in single-channel speech separation; however, their extreme computational load makes them difficult to deploy on resource-constrained mobile or IoT devices. We thus present Papez, a lightweight and computation-efficient single-channel speech separation model. Papez is built on three key techniques. First, we replace the inter-chunk Transformer with a small-sized auditory working memory. Second, we adaptively prune input tokens that need no further processing. Finally, we reduce the number of parameters with a recurrent Transformer. Our extensive evaluation shows that Papez achieves the best resource-accuracy tradeoff by a large margin. We publicly release our source code at \texttt{https://github.com/snuhcs/Papez}.
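To make the three techniques concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it assumes learnable "auditory working memory" tokens prepended to the chunk sequence, a sigmoid halting head for adaptive token pruning, and a single weight-shared (recurrent) Transformer block applied for several iterations. All module names, shapes, and the halting rule are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

class RecurrentTransformerSketch(nn.Module):
    # Hypothetical sketch of the three ideas named in the abstract.
    def __init__(self, dim=128, heads=4, mem_tokens=8, iters=4,
                 halt_thresh=0.9):
        super().__init__()
        # Small auditory working memory: learnable tokens shared
        # across all chunks (replaces an inter-chunk Transformer).
        self.memory = nn.Parameter(torch.randn(1, mem_tokens, dim))
        # One Transformer block reused every iteration
        # (parameter sharing = "recurrent Transformer").
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        # Per-token halting score for adaptive pruning
        # (hypothetical head; the real criterion may differ).
        self.halt = nn.Linear(dim, 1)
        self.iters = iters
        self.halt_thresh = halt_thresh

    def forward(self, x):                    # x: (batch, tokens, dim)
        b = x.size(0)
        mem = self.memory.expand(b, -1, -1)  # broadcast memory to batch
        h = torch.cat([mem, x], dim=1)
        active = torch.ones(h.shape[:2], dtype=torch.bool,
                            device=h.device)
        for _ in range(self.iters):          # recurrent reuse of one block
            h_new = self.block(h)
            p = torch.sigmoid(self.halt(h_new)).squeeze(-1)
            # Freeze tokens whose halting score passed the threshold;
            # a real implementation would drop them from the sequence
            # entirely to actually save computation.
            h = torch.where(active.unsqueeze(-1), h_new, h)
            active = active & (p < self.halt_thresh)
        mem_len = self.memory.size(1)
        return h[:, mem_len:]                # strip memory tokens
\end{verbatim}

Pruning is expressed here as freezing halted tokens for clarity; realizing the compute savings the abstract claims would require physically removing halted tokens from the attention computation.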