Transformers have emerged as a prominent model framework for audio tagging (AT), achieving state-of-the-art (SOTA) performance on the widely used Audioset dataset. However, their impressive performance often comes at the cost of high memory usage, slow inference, and considerable model delay, rendering them impractical for real-world AT applications. In this study, we introduce streaming audio transformers (SAT), which combine the vision transformer (ViT) architecture with Transformer-XL-like chunk processing, enabling efficient processing of long-range audio signals. Our proposed SAT is benchmarked against other transformer-based SOTA methods, achieving significant improvements in mean average precision (mAP) at delays of 2 s and 1 s, while also exhibiting significantly lower memory usage and computational overhead. Checkpoints are publicly available at https://github.com/RicherMans/SAT.
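The core idea named above, Transformer-XL-like chunk processing, can be illustrated with a minimal sketch: each fixed-size chunk attends to its own frames plus a bounded cache of hidden states from the preceding chunk, so memory and latency stay constant regardless of how long the audio stream runs. The code below is a simplified single-head illustration of this mechanism, not the paper's actual SAT implementation; all function and variable names are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def chunk_attention(chunk, memory, Wq, Wk, Wv):
    """Single-head self-attention over the current chunk, with keys/values
    extended by cached hidden states from the previous chunk
    (Transformer-XL-style; a hypothetical sketch, not the SAT model)."""
    ctx = np.concatenate([memory, chunk], axis=0)   # [mem + T, d]
    q = chunk @ Wq   # queries come only from the current chunk
    k = ctx @ Wk     # keys/values also cover the cached memory
    v = ctx @ Wv
    att = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return att @ v

def stream(frames, chunk_size, mem_size, Wq, Wk, Wv):
    """Process a long frame sequence chunk by chunk with bounded memory,
    so per-step cost is independent of total stream length."""
    d = frames.shape[1]
    memory = np.zeros((0, d))
    outputs = []
    for start in range(0, len(frames), chunk_size):
        chunk = frames[start:start + chunk_size]
        outputs.append(chunk_attention(chunk, memory, Wq, Wk, Wv))
        memory = chunk[-mem_size:]  # cache only the most recent states
    return np.concatenate(outputs, axis=0)
```

Because the cache is truncated to `mem_size` states, the attention context never grows beyond `mem_size + chunk_size` frames, which is what bounds the memory usage and delay of a streaming model.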