Efficient and privacy-preserving multimodal interaction is essential as AR, VR, and modern smartphones with powerful cameras become primary interfaces for human-computer communication. Existing large vision-language models (VLMs) that enable multimodal interaction often rely on cloud-based processing, raising two significant concerns: (1) visual privacy, since sensitive vision data must be transmitted to servers, and (2) limited real-time, on-device usability. This paper explores Visual Instruction Rewriting, a novel approach that transforms multimodal instructions into text-only commands, allowing a lightweight on-device instruction-rewriting VLM (250M parameters) to integrate seamlessly with existing conversational AI systems while enhancing vision-data privacy. To achieve this, we present a dataset of over 39,000 examples across 14 domains and develop a compact VLM, pretrained on image-captioning datasets and fine-tuned for instruction rewriting. Experimental results, evaluated with NLG metrics such as BLEU, METEOR, and ROUGE, along with semantic parsing analysis, demonstrate that even a quantized version of the model (<500 MB storage footprint) achieves effective instruction rewriting, thus enabling privacy-focused, multimodal AI applications.