Flow-based vision-language-action (VLA) models excel at embodied control but suffer from intractable likelihoods during multi-step sampling, which hinders online reinforcement learning. We propose \textbf{\textit{$\boldsymbol{\pi}$-StepNFT}} (Step-wise Negative-aware Fine-Tuning), a critic-free and likelihood-free framework that requires only a single forward pass per optimization step and eliminates auxiliary value networks. We identify that wider exploration spaces necessitate finer-grained, step-wise guidance for alignment. Empirically, $\pi$-StepNFT unlocks latent capabilities on LIBERO while remaining robust in few-shot settings. Moreover, it generalizes better on ManiSkill, outperforming value-based baselines in out-of-distribution (OOD) scenarios by avoiding overfitting to multimodal features. These properties make $\pi$-StepNFT a scalable and promising solution for complex real-world applications.
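To make the training signal concrete, the sketch below gives one plausible reading of a step-wise, negative-aware objective for a flow-matching policy: success and failure rollouts are both scored with a single forward pass and no critic, with failures repelled through a bounded margin term. Everything here (the \texttt{policy} signature, the margin-based repulsion, and the \texttt{beta} weighting) is a hypothetical illustration inferred from the abstract, not the paper's actual implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def stepwise_nft_loss(policy, obs, actions, success, beta=1.0, margin=1.0):
    """Hypothetical sketch of a step-wise negative-aware fine-tuning loss.

    Assumes `policy(a_t, t, obs)` predicts a flow-matching velocity field;
    the signature, margin repulsion, and beta weighting are illustrative
    assumptions, not the paper's implementation.
    """
    # Sample one flow step per transition: supervision is step-wise rather
    # than trajectory-level, and likelihood-free by construction.
    t = torch.rand(actions.shape[0], 1, device=actions.device)
    noise = torch.randn_like(actions)
    a_t = (1.0 - t) * noise + t * actions   # linear interpolation path
    target_v = actions - noise              # flow-matching velocity target

    pred_v = policy(a_t, t, obs)            # single forward pass, no critic
    per_sample = F.mse_loss(pred_v, target_v, reduction="none").mean(dim=-1)

    # Negative-aware weighting: fit successful rollouts, repel failed ones
    # through a bounded margin so the objective stays well-posed.
    neg_term = beta * F.relu(margin - per_sample)
    return torch.where(success, per_sample, neg_term).mean()
\end{verbatim}
In this reading, the attractive branch is ordinary flow-matching regression on successes, while the repulsive branch penalizes a failed rollout only until its regression error exceeds the margin, which keeps the negative signal from dominating training.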