Vision-Language Models (VLMs) typically connect their two modalities through a single, asymmetric link that feeds only the final output of the vision encoder into the input of the large language model (LLM), creating a visual feature bottleneck. This static design prevents the LLM from aligning with the encoder's hierarchical visual representations, limiting its ability to integrate local details and global semantics into coherent reasoning. To resolve this, we introduce Cross-Layer Injection (CLI), a novel and lightweight framework that builds a dynamic many-to-many bridge between the two modalities. CLI consists of two synergistic, parameter-efficient components: an Adaptive Multi-Projection (AMP) module that harmonizes features from diverse vision layers, and an Adaptive Gating Fusion (AGF) mechanism that lets the LLM selectively inject the most relevant visual information, conditioned on its real-time decoding context. We validate the effectiveness and versatility of CLI by integrating it into LLaVA-OneVision and LLaVA-1.5. Extensive experiments on 18 diverse benchmarks show consistent performance gains, establishing CLI as a scalable paradigm that deepens multimodal understanding by granting LLMs on-demand access to the full visual hierarchy.
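The abstract does not specify the internals of AMP and AGF, so the following is a minimal PyTorch sketch of how the two components could plug together. The layer selection, projector design, token pooling, and per-token gating granularity are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AMP(nn.Module):
    """Adaptive Multi-Projection (sketch): one projector per selected
    vision-encoder layer, mapping each layer's features into the LLM's
    hidden dimension so that all levels share a common space."""
    def __init__(self, num_vision_layers: int, vision_dim: int, llm_dim: int):
        super().__init__()
        self.projectors = nn.ModuleList(
            nn.Sequential(nn.Linear(vision_dim, llm_dim), nn.GELU(),
                          nn.Linear(llm_dim, llm_dim))
            for _ in range(num_vision_layers)
        )

    def forward(self, vision_feats):  # list of (B, N, vision_dim) tensors
        # Project each vision layer independently, then stack level-wise.
        return torch.stack(
            [proj(f) for proj, f in zip(self.projectors, vision_feats)], dim=1
        )  # (B, L, N, llm_dim)

class AGF(nn.Module):
    """Adaptive Gating Fusion (sketch): the LLM hidden state drives a gate
    over vision levels, so the decoder chooses how much of each level to
    inject at every decoding position."""
    def __init__(self, llm_dim: int, num_vision_layers: int):
        super().__init__()
        self.gate = nn.Linear(llm_dim, num_vision_layers)

    def forward(self, llm_hidden, projected):  # (B, T, D), (B, L, N, D)
        # Context-dependent mixing weights over vision levels, per token.
        weights = torch.softmax(self.gate(llm_hidden), dim=-1)  # (B, T, L)
        # Mean-pool visual tokens per level (an assumed simplification).
        levels = projected.mean(dim=2)                          # (B, L, D)
        visual = torch.einsum("btl,bld->btd", weights, levels)  # (B, T, D)
        return llm_hidden + visual  # residual injection into the LLM layer

# Toy usage: fuse three vision layers into one LLM layer's hidden states.
amp = AMP(num_vision_layers=3, vision_dim=1024, llm_dim=4096)
agf = AGF(llm_dim=4096, num_vision_layers=3)
vision_feats = [torch.randn(2, 576, 1024) for _ in range(3)]
llm_hidden = torch.randn(2, 32, 4096)
fused = agf(llm_hidden, amp(vision_feats))  # (2, 32, 4096)
```

Inserting one such gate at several LLM layers would realize the many-to-many bridge the abstract describes: each layer can draw on a different, context-dependent mix of the visual hierarchy rather than a single fixed projection.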