We propose WHISPER-GPT: a generative large language model (LLM) for speech and music that works with continuous audio representations and discrete tokens simultaneously within a single architecture. There has been a huge surge in generative audio, speech, and music models that utilize discrete audio tokens derived from neural compression algorithms, e.g., ENCODEC. However, a major drawback of this approach is handling the context length: it blows up for high-fidelity generative architectures that must account for the audio content at all the various frequencies when predicting the next token. By combining a continuous audio representation such as the spectrogram with discrete acoustic tokens, we retain the best of both worlds: a single token carries all the information needed from the audio at a specific time instant, yet the LLM can still predict future tokens, enabling sampling and the other benefits a discrete space provides. We show how our architecture improves the perplexity and negative log-likelihood scores for next-token prediction compared to a token-based LLM for speech and music.
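A minimal sketch of the hybrid idea described above, not the paper's actual implementation: at each time step, a continuous mel-spectrogram frame and its time-aligned discrete acoustic token (e.g., from ENCODEC) are fused into a single input vector, and a causal Transformer predicts the next discrete token. All class names, layer sizes, and the fusion-by-addition choice are illustrative assumptions.

```python
import torch
import torch.nn as nn


class HybridAudioLM(nn.Module):
    """Sketch: fuse continuous spectrogram frames with discrete acoustic
    tokens per time step, then do standard next-token prediction.
    Hyperparameters and names are assumptions, not the paper's values."""

    def __init__(self, vocab_size=1024, n_mels=80, d_model=512,
                 n_heads=8, n_layers=6, max_len=2048):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)  # discrete branch
        self.frame_proj = nn.Linear(n_mels, d_model)        # continuous branch
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, mel_frames):
        # tokens:     (B, T)          discrete acoustic token ids
        # mel_frames: (B, T, n_mels)  time-aligned spectrogram frames
        B, T = tokens.shape
        pos = torch.arange(T, device=tokens.device)
        # Fuse both views of the same time instant into a single token.
        x = self.token_emb(tokens) + self.frame_proj(mel_frames) + self.pos_emb(pos)
        # Causal mask: each position attends only to the past.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                     device=tokens.device), diagonal=1)
        h = self.backbone(x, mask=mask)
        return self.lm_head(h)  # logits over the next discrete token


# Usage: next-token negative log-likelihood, as in standard LM training.
model = HybridAudioLM()
tokens = torch.randint(0, 1024, (2, 100))
mels = torch.randn(2, 100, 80)
logits = model(tokens, mels)
nll = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 1024), tokens[:, 1:].reshape(-1))
```

Because the output space stays discrete, the usual LM machinery (temperature sampling, top-k, perplexity and NLL evaluation) applies unchanged; the continuous frames only enrich the input at each time step.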