Most widely used modern audio codecs, such as Ogg Vorbis and MP3, as well as more recent "neural" codecs like Meta's Encodec or the Descript Audio Codec, are based on block coding: audio is divided into overlapping, fixed-size "frames", which are then compressed. While these codecs often yield excellent reproductions and can be used for downstream tasks such as text-to-audio generation, they do not produce an intuitive, directly interpretable representation. In this work, we introduce a proof-of-concept audio encoder that represents audio as a sparse set of events and their times of occurrence. Rudimentary physics-based assumptions are used to model attack transients and the physical resonance of both the instrument being played and the room in which the performance occurs, with the aim of encouraging a sparse, parsimonious, and easy-to-interpret representation.
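To make the event-based framing concrete, the following is a minimal illustrative sketch, not the encoder described in this work: it renders a toy signal from a short list of (time, amplitude, frequency, decay) events, each realized as an exponentially decaying sinusoid standing in for instrument and room resonance. All function names and parameter values are assumptions chosen for illustration.

```python
import numpy as np

SAMPLE_RATE = 22050


def damped_resonance(freq_hz: float, decay: float, n_samples: int) -> np.ndarray:
    """Exponentially decaying sinusoid: a crude stand-in for a physical resonance."""
    t = np.arange(n_samples) / SAMPLE_RATE
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq_hz * t)


def render_events(events, total_samples: int) -> np.ndarray:
    """Render a sparse event list into audio.

    events: iterable of (time_seconds, amplitude, freq_hz, decay) tuples.
    The entire "encoding" of the signal is just this short list.
    """
    out = np.zeros(total_samples)
    for time_s, amp, freq_hz, decay in events:
        start = int(time_s * SAMPLE_RATE)
        out[start:] += amp * damped_resonance(freq_hz, decay, total_samples - start)
    return out


if __name__ == "__main__":
    # Three sparse events describe two seconds of audio.
    events = [
        (0.10, 0.8, 220.0, 6.0),
        (0.60, 0.5, 330.0, 4.0),
        (1.20, 0.9, 440.0, 8.0),
    ]
    audio = render_events(events, total_samples=2 * SAMPLE_RATE)
```

Unlike a frame-based code, each entry in the event list can be read directly as "something happened at this time, with this loudness and this resonant character."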