Efficient and privacy-preserving multimodal interaction is essential as AR, VR, and modern smartphones with powerful cameras become primary interfaces for human-computer communication. Existing large vision-language models (VLMs) that enable multimodal interaction often rely on cloud-based processing, raising significant concerns about (1) visual privacy, since sensitive vision data must be transmitted to servers, and (2) limited real-time, on-device usability. This paper explores Visual Instruction Rewriting, a novel approach that transforms multimodal instructions into text-only commands, allowing a lightweight on-device instruction-rewriting VLM (250M parameters) to integrate seamlessly with existing conversational AI systems and thereby enhancing vision data privacy. To achieve this, we present a dataset of over 39,000 examples across 14 domains and develop a compact VLM, pretrained on image captioning datasets and fine-tuned for instruction rewriting. Experimental results, based on NLG metrics such as BLEU, METEOR, and ROUGE together with semantic parsing analysis, demonstrate that even a quantized version of the model (<500MB storage footprint) achieves effective instruction rewriting, thus enabling privacy-focused, multimodal AI applications.
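To make the rewriting pipeline concrete, the following is a minimal sketch of how an on-device rewriter could be invoked: the image and the user's spoken/typed instruction go into a small VLM, and the output is a self-contained text-only command that can be handed to an existing text-based assistant without the image ever leaving the device. The checkpoint name, file names, and example strings below are hypothetical, and the Hugging Face transformers Vision2Seq interface is used here only as one possible runtime, not as the paper's actual implementation.

```python
# Sketch of on-device Visual Instruction Rewriting (hypothetical checkpoint name;
# transformers' Vision2Seq interface is assumed as one possible runtime).
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

MODEL_ID = "org/visual-instruction-rewriter-250m"  # hypothetical ~250M-parameter rewriter

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(MODEL_ID)

def rewrite_instruction(image_path: str, user_instruction: str) -> str:
    """Turn a multimodal instruction (image + text) into a text-only command."""
    image = Image.open(image_path)
    inputs = processor(images=image, text=user_instruction, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0]

# The rewritten command is plain text, so only it (not the raw image) needs to be
# passed to an existing conversational AI system.
command = rewrite_instruction("menu_photo.jpg", "Call this restaurant")
print(command)  # e.g., "Call the phone number shown on the restaurant menu for <name>"
```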