Significant advancements have recently been achieved in the field of multi-modal large language models (MLLMs), demonstrating their remarkable capabilities in understanding and reasoning across diverse tasks. However, these models are often trained for specific tasks and rely on task-specific input-output formats, limiting their applicability to a broader range of tasks. This raises a fundamental question: can we develop a unified approach to representing and handling different multi-modal tasks so as to maximize the generalizability of MLLMs? In this paper, we propose UnifiedMLLM, a comprehensive model designed to represent various tasks using a unified representation. Our model exhibits strong capabilities in comprehending the implicit intent of user instructions and performing reasoning. In addition to generating textual responses, our model outputs task tokens and grounding tokens, which serve as indicators of task type and task granularity. These outputs are subsequently passed through a task router and directed to specific expert models for task completion. To train our model, we construct a task-specific dataset and a 100k multi-task dataset encompassing complex scenarios. Employing a three-stage training strategy, we equip our model with robust reasoning and task-processing capabilities while preserving its generalization capacity and knowledge reservoir. Extensive experiments showcase the impressive performance of our unified representation approach across various tasks, surpassing existing methodologies. Furthermore, our approach exhibits exceptional scalability and generality. Our code, model, and dataset will be available at \url{https://github.com/lzw-lzw/UnifiedMLLM}.
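The task-token routing described above can be illustrated with a minimal sketch. Note that the token formats (`<task:...>`, `<region:...>`), the expert names, and the `route` function below are all hypothetical illustrations, not the paper's actual interface:

```python
import re

# Hypothetical expert registry; the real expert models and token
# formats used by UnifiedMLLM are assumptions for illustration.
EXPERTS = {
    "edit": lambda prompt, box: f"image-editing expert: '{prompt}' on {box}",
    "generate": lambda prompt, box: f"generation expert: '{prompt}' on {box}",
}

def route(model_output: str):
    """Parse task/grounding tokens from an MLLM response and dispatch
    to the matching expert model (sketch of the task-router idea)."""
    task = re.search(r"<task:(\w+)>", model_output)
    region = re.search(r"<region:([\d,]+)>", model_output)
    if task is None:
        # No task token: a plain textual response, no expert needed.
        return model_output
    box = tuple(map(int, region.group(1).split(","))) if region else None
    # Strip the special tokens to recover the textual instruction.
    text = re.sub(r"<task:\w+>|<region:[\d,]+>", "", model_output).strip()
    return EXPERTS[task.group(1)](text, box)

print(route("Make the sky darker <task:edit> <region:10,20,200,120>"))
print(route("This image shows a sunset over the ocean."))
```

The grounding token here carries a bounding box as the task-granularity signal; in the routed case the expert receives both the cleaned instruction and the region, while token-free outputs pass through unchanged as ordinary text responses.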