In the past, the field of drum source separation faced significant challenges due to limited data availability, hindering the adoption of cutting-edge deep learning methods that have found success in other related audio applications. In this manuscript, we introduce StemGMD, a large-scale audio dataset of isolated single-instrument drum stems. Each audio clip is synthesized from MIDI recordings of expressive drum performances using ten real-sounding acoustic drum kits. Totaling 1224 hours, StemGMD is the largest audio dataset of drums to date and the first to comprise isolated audio clips for every instrument in a canonical nine-piece drum kit. We leverage StemGMD to develop LarsNet, a novel deep drum source separation model. Through a bank of dedicated U-Nets, LarsNet can separate five stems from a stereo drum mixture faster than real-time and is shown to significantly outperform state-of-the-art nonnegative spectro-temporal factorization methods.
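The per-stem separation scheme described above can be sketched as follows. This is a minimal illustration, not LarsNet's actual implementation: it assumes one dedicated mask network per drum voice, each estimating a soft spectral mask applied to the mixture's magnitude spectrogram, and stubs the U-Nets with uniform placeholder masks. The five stem names and the spectrogram shape are assumptions for illustration only.

```python
import numpy as np

# Assumed five-stem split; the actual stem taxonomy is defined by the paper.
STEMS = ["kick", "snare", "toms", "hi-hat", "cymbals"]


def stub_unet(mag: np.ndarray) -> np.ndarray:
    """Placeholder for a dedicated per-stem U-Net.

    A real network would map the mixture spectrogram to a soft mask in
    [0, 1]; here we return a uniform mask purely for illustration.
    """
    return np.full_like(mag, 1.0 / len(STEMS))


def separate(mixture_mag: np.ndarray) -> dict:
    """Run each stem's dedicated mask network on the stereo mixture."""
    return {stem: stub_unet(mixture_mag) * mixture_mag for stem in STEMS}


# Stereo magnitude spectrogram: (channels, frequency bins, time frames).
mix = np.abs(np.random.randn(2, 513, 100))
stems = separate(mix)
```

Because the placeholder masks are uniform and sum to one, the five estimated stems add back up to the input mixture; a trained bank of U-Nets would instead produce masks concentrated on each instrument's spectro-temporal footprint.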