In robotic manipulation, vision-language-action (VLA) models have emerged as a promising paradigm for learning generalizable and scalable robot policies. Most existing VLA frameworks rely on standard supervised objectives, typically cross-entropy for discrete actions and mean squared error (MSE) for continuous action regression, which impose strong pointwise constraints on individual predictions. In this work, we focus on continuous-action VLA models and move beyond conventional MSE-based regression by reshaping action error distributions during training. Drawing on information-theoretic principles, we introduce Minimum Error Entropy (MEE) into modern VLA architectures and propose a trajectory-level MEE objective, along with two weighted variants, each combined with MSE for continuous-action VLA training. We evaluate our approaches across standard, few-shot, and noisy settings on multiple representative VLA architectures, using simulation benchmarks such as LIBERO and SimplerEnv as well as real-world robotic manipulation tasks. Experimental results demonstrate consistent improvements in success rates and robustness across these settings. Under imbalanced data regimes, the gains persist within a well-characterized operating range, while incurring negligible additional training cost and no impact on inference efficiency. We further provide theoretical analyses that explain why MEE-based supervision is effective and characterize its practical range. Project Page: https://cognition2actionlab.github.io/VLA-TMEE.github.io/
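To make the objective concrete, the following is a minimal sketch of an MEE term combined with MSE, assuming the common formulation of MEE as minimizing Renyi's quadratic entropy of the errors via a Gaussian Parzen-window estimate of the information potential. The function names (`mee_loss`, `combined_loss`), the weighting `lam`, and the kernel bandwidth `sigma` are illustrative assumptions, not the paper's exact trajectory-level objective or its weighted variants.

```python
import numpy as np

def mee_loss(errors, sigma=1.0):
    """MEE surrogate: negative log of the Parzen-estimated information potential.

    errors: (N, D) array of prediction errors, e.g. over one trajectory.
    Minimizing this term concentrates the error distribution, which is
    equivalent to minimizing Renyi's quadratic entropy of the errors.
    """
    e = np.asarray(errors, dtype=float)
    diff = e[:, None, :] - e[None, :, :]        # pairwise error differences
    sq_dist = np.sum(diff ** 2, axis=-1)        # squared pairwise distances
    kernel = np.exp(-sq_dist / (2.0 * sigma ** 2))  # Gaussian kernel values
    info_potential = kernel.mean()              # V(e) = (1/N^2) sum_ij G(e_i - e_j)
    return -np.log(info_potential)              # entropy estimate (>= 0)

def combined_loss(pred, target, lam=0.1, sigma=1.0):
    """MSE plus a weighted MEE term (illustrative combination)."""
    err = pred - target
    mse = np.mean(err ** 2)
    return mse + lam * mee_loss(err, sigma)
```

Note that identical errors yield an information potential of 1 and hence an MEE term of 0; as the errors spread out, the kernel values shrink and the entropy term grows, so the gradient pushes residuals toward a concentrated distribution rather than only toward zero mean square.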