We propose LCLA (Language-Conditioned Latent Alignment), a framework for vision-language navigation that learns modular perception-action interfaces by aligning sensory observations to a latent representation of an expert policy. The expert is first trained with privileged state information, inducing a latent space sufficient for control, after which its latent interface and action head are frozen. A lightweight adapter is then trained to map raw visual-language observations, via a frozen vision-language model, into the expert's latent space, reducing the problem of visuomotor learning to supervised latent alignment rather than end-to-end policy optimization. This decoupling enforces a stable contract between perception and control, enabling expert behavior to be reused across sensing modalities and environmental variations. We instantiate LCLA and evaluate it on a vision-language indoor navigation task, where aligned latent spaces yield strong in-distribution performance and robust zero-shot generalization to unseen environments, lighting conditions, and viewpoints while remaining lightweight at inference time.
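The second stage described above can be sketched as a small supervised regression: a lightweight adapter is fit so that features from a frozen perception model match the frozen expert's latent targets. The sketch below is a minimal illustration under assumed shapes and a purely linear adapter; the simulated expert latents, dimensions, and variable names (`W_expert`, `W_adapter`, `vlm_feats`) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_OBS, D_LATENT, N = 32, 8, 256  # VLM feature dim, expert latent dim, samples

# Stage 1 artifact (frozen): the privileged expert's latent interface.
# Here we simulate expert latents as a fixed linear map, standing in for
# the latent space induced by training with privileged state.
W_expert = rng.normal(size=(D_OBS, D_LATENT)) / np.sqrt(D_OBS)

# Paired data: frozen vision-language features and expert latent targets
# collected from expert rollouts.
vlm_feats = rng.normal(size=(N, D_OBS))   # f(o) from a frozen VLM (simulated)
expert_latents = vlm_feats @ W_expert     # z*, the alignment targets

# Stage 2: train a lightweight adapter g so that g(f(o)) ~ z*.
# Plain gradient descent on the mean-squared alignment error.
W_adapter = np.zeros((D_OBS, D_LATENT))
lr = 0.1
for _ in range(500):
    pred = vlm_feats @ W_adapter
    err = pred - expert_latents           # supervised alignment residual
    grad = vlm_feats.T @ err / N          # gradient of 0.5 * mean squared error
    W_adapter -= lr * grad

mse = float(np.mean((vlm_feats @ W_adapter - expert_latents) ** 2))
```

At convergence the adapter reproduces the expert latents from perception features alone, after which the frozen action head consumes the aligned latents unchanged; this is the sense in which visuomotor learning reduces to supervised latent alignment rather than end-to-end policy optimization.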