Deep neural network (DNN)-based policy models, such as vision-language-action (VLA) models, are transforming automated decision-making across applications by interpreting multi-modal data. However, scaling these models sharply increases computational cost, which is problematic in fields such as robot manipulation and autonomous driving that demand fast, accurate responses. To enable deployment on resource-limited hardware, we propose a new quantization framework for imitation learning (IL)-based policy models that fine-tunes parameters during training to enhance robustness against low-bit precision errors, thereby maintaining efficiency and reliability under constrained conditions. Our evaluations of 4-bit weight quantization on representative robot manipulation tasks, run on a real edge GPU, demonstrate that the framework achieves up to 2.5x speedup and 2.5x energy savings while preserving accuracy. For self-driving models with 4-bit weights and activations, the framework achieves up to 3.7x speedup and 3.1x energy savings on a low-end GPU. These results highlight the practical potential of deploying IL-based policy models on resource-constrained devices.
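To make the quantization setting concrete, the following is a minimal sketch (not the paper's implementation) of the fake-quantization step commonly used in quantization-aware fine-tuning: in the forward pass, weights are rounded onto a 4-bit grid so that training is exposed to the same low-bit precision errors seen at deployment. The function name and the symmetric per-channel scheme are illustrative assumptions.

```python
import numpy as np

def fake_quantize_4bit(w: np.ndarray) -> np.ndarray:
    """Symmetric per-channel 4-bit quantize-dequantize (rows = output channels).

    Illustrative sketch: the paper's actual quantizer may differ (e.g. asymmetric
    ranges, learned scales, or group-wise quantization).
    """
    qmax = 7  # signed 4-bit integers span [-8, 7]; a symmetric scheme uses [-7, 7]
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)       # guard against all-zero channels
    q = np.clip(np.round(w / scale), -qmax, qmax)  # snap to the 4-bit integer grid
    return q * scale                               # dequantize so training stays in float

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)
w_q = fake_quantize_4bit(w)
```

During fine-tuning, gradients typically flow through the rounding via a straight-through estimator; at inference, only the integer grid values and per-channel scales need to be stored, which is the source of the memory and speed savings reported above.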