Multi-modal large language models (MLLMs) have achieved remarkable success in fine-grained visual understanding across a range of tasks. However, they often face significant challenges due to inadequate alignment of fine-grained knowledge, which limits their ability to accurately capture local details and attain comprehensive global perception. While recent advances have focused on aligning object expressions with grounding information, they typically lack explicit integration of object images, which carry rich information beyond plain text or coordinates. To bridge this gap, we introduce a novel fine-grained visual knowledge alignment method that effectively aligns and integrates multi-scale knowledge of objects, including texts, coordinates, and images. This method is underpinned by our multi-scale fine-grained enhancement data synthesis pipeline, which provides over 300K essential training samples to strengthen alignment and improve overall performance. Furthermore, we present TinyGroundingGPT, a series of compact models optimized for high-level alignment. At a scale of approximately 3B parameters, TinyGroundingGPT achieves outstanding results on grounding tasks while delivering performance comparable to larger MLLMs in complex visual scenarios.
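To make the notion of a multi-scale alignment sample concrete, the sketch below shows one plausible way to bundle an object's textual expression, its normalized bounding-box coordinates, and a cropped object image into a single training record. This is a minimal illustration only; the names `ObjectAlignmentSample` and `build_sample`, and the normalized (x1, y1, x2, y2) coordinate convention, are assumptions for exposition and not the paper's actual data format.

```python
from dataclasses import dataclass
from PIL import Image


@dataclass
class ObjectAlignmentSample:
    """One hypothetical multi-scale record: text, coordinates, and object image."""
    expression: str           # textual object expression, e.g. "the red umbrella"
    bbox: tuple               # normalized (x1, y1, x2, y2) in [0, 1]
    object_crop: Image.Image  # cropped object region providing local visual detail


def build_sample(image: Image.Image, expression: str, bbox: tuple) -> ObjectAlignmentSample:
    """Crop the referenced object and bundle it with its text and coordinates."""
    w, h = image.size
    x1, y1, x2, y2 = bbox
    crop = image.crop((int(x1 * w), int(y1 * h), int(x2 * w), int(y2 * h)))
    return ObjectAlignmentSample(expression=expression, bbox=bbox, object_crop=crop)


if __name__ == "__main__":
    img = Image.new("RGB", (640, 480))  # placeholder image for demonstration
    sample = build_sample(img, "the red umbrella", (0.25, 0.30, 0.55, 0.80))
    print(sample.expression, sample.bbox, sample.object_crop.size)
```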