Large Language Models (LLMs) have revolutionized natural language processing, but their application to speech-based tasks remains challenging due to the complexities of integrating audio and text modalities. This paper introduces Ichigo, a mixed-modal model that seamlessly processes interleaved sequences of speech and text. Using a tokenized early-fusion approach, Ichigo quantizes speech into discrete tokens and employs a uniform transformer-based architecture for both the speech and text modalities. This method enables joint reasoning and generation across modalities without the need for separate adapters. We present a comprehensive training methodology, including pre-training on multilingual speech recognition datasets and fine-tuning on a curated instruction dataset. Ichigo demonstrates state-of-the-art performance on speech question-answering benchmarks, outperforming existing open-source speech language models and achieving results comparable to cascaded systems. Notably, Ichigo exhibits a time-to-first-token latency of just 111 ms, significantly lower than that of current models. Our approach not only advances the field of multimodal AI but also provides a framework for smaller research teams to contribute effectively to open-source speech-language models.