Despite the success of Large Vision--Language Models (LVLMs), most existing architectures suffer from a representation bottleneck: they rely on static, instruction-agnostic vision encoders whose visual representations are consumed in the same way regardless of the textual task. This rigidity hinders fine-grained reasoning in which task-specific visual cues are critical. To address this issue, we propose iGVLM, a general framework for instruction-guided visual modulation. iGVLM introduces a decoupled dual-branch architecture: a frozen representation branch that preserves the task-agnostic visual representations learned during pre-training, and a dynamic conditioning branch that performs affine feature modulation via Adaptive Layer Normalization (AdaLN). This design enables a smooth transition from general-purpose perception to instruction-aware reasoning while maintaining the structural integrity and stability of pre-trained visual priors. Beyond standard benchmarks, we introduce MM4, a controlled diagnostic probe for quantifying logical consistency under multi-query, multi-instruction settings. Extensive experiments show that iGVLM consistently enhances instruction sensitivity across diverse language backbones, offering a plug-and-play paradigm for bridging passive perception and active reasoning.
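To make the AdaLN-based conditioning branch concrete, the sketch below shows one plausible way an instruction embedding could produce a per-channel scale and shift applied to frozen visual tokens. This is a minimal illustration under stated assumptions, not the paper's released implementation: the module name `AdaLNModulation`, the dimensions, the pooled instruction embedding, and the zero-initialization of the modulation head are all illustrative choices.

```python
# Minimal sketch (PyTorch) of instruction-conditioned affine modulation via AdaLN.
# Assumptions: a frozen vision branch yields patch tokens, an instruction encoder
# yields a pooled embedding; all names and dimensions here are hypothetical.
import torch
import torch.nn as nn


class AdaLNModulation(nn.Module):
    """Dynamic conditioning branch: maps an instruction embedding to a
    per-channel (scale, shift) pair applied after LayerNorm on visual tokens."""

    def __init__(self, vis_dim: int, instr_dim: int):
        super().__init__()
        # LayerNorm without learnable affine params; gamma/beta come from the instruction.
        self.norm = nn.LayerNorm(vis_dim, elementwise_affine=False)
        self.to_scale_shift = nn.Sequential(
            nn.SiLU(),
            nn.Linear(instr_dim, 2 * vis_dim),
        )
        # Zero-init so training starts from the unmodulated frozen features.
        nn.init.zeros_(self.to_scale_shift[-1].weight)
        nn.init.zeros_(self.to_scale_shift[-1].bias)

    def forward(self, vis_tokens: torch.Tensor, instr_emb: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (B, N, vis_dim) from the frozen representation branch
        # instr_emb:  (B, instr_dim) pooled instruction embedding
        gamma, beta = self.to_scale_shift(instr_emb).chunk(2, dim=-1)
        return self.norm(vis_tokens) * (1 + gamma.unsqueeze(1)) + beta.unsqueeze(1)


if __name__ == "__main__":
    # Toy usage: modulate frozen ViT-style patch tokens with an instruction embedding.
    mod = AdaLNModulation(vis_dim=1024, instr_dim=768)
    vis = torch.randn(2, 256, 1024)   # frozen visual tokens
    instr = torch.randn(2, 768)       # pooled instruction embedding
    print(mod(vis, instr).shape)      # torch.Size([2, 256, 1024])
```

Zero-initializing the modulation head is one common way to realize the "smooth transition" the abstract mentions, since the conditioning branch then starts as an identity on the frozen features; whether iGVLM uses this exact scheme is not stated here.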