Real-time multimodal auto-completion is essential for digital assistants, chatbots, design tools, and healthcare consultations, where user inputs rely on shared visual context. We introduce Multimodal Auto-Completion (MAC), a task that predicts upcoming characters in live chats using partially typed text and visual cues. Unlike traditional text-only auto-completion (TAC), MAC grounds predictions in multimodal context to better capture user intent. To enable this task, we adapt MMDialog and ImageChat to create benchmark datasets. We evaluate leading vision-language models (VLMs) against strong textual baselines, highlighting trade-offs between accuracy and efficiency. We present Router-Suggest, a router framework that dynamically selects between textual models and VLMs based on dialog context, along with a lightweight variant for resource-constrained environments. Router-Suggest achieves a 2.3x to 10x speedup over the best-performing VLM. A user study shows that VLMs significantly outperform textual models in user satisfaction, notably reducing typing effort and improving completion quality in multi-turn conversations. These findings underscore the need for multimodal context in auto-completion, enabling smarter, user-aware assistants.
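The abstract does not specify Router-Suggest's routing signal or model interfaces, so the following is only a minimal illustrative sketch, assuming a simple heuristic (route to the VLM when recent turns carry images); all names (`DialogTurn`, `Router`, `image_window`) are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch of context-based routing in the spirit of Router-Suggest.
# The real routing criterion and models are defined in the paper, not here.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class DialogTurn:
    text: str
    image: Optional[bytes] = None  # raw image payload, if the turn has one


@dataclass
class Router:
    # Fast text-only completer: prefix -> suggested continuation.
    text_complete: Callable[[str], str]
    # Slower multimodal completer: (dialog history, prefix) -> continuation.
    vlm_complete: Callable[[List[DialogTurn], str], str]
    # Assumed heuristic: how many recent turns to scan for visual context.
    image_window: int = 3

    def complete(self, history: List[DialogTurn], prefix: str) -> str:
        """Route to the VLM only when recent dialog context is visual;
        otherwise fall back to the cheaper textual model."""
        recent = history[-self.image_window:]
        if any(turn.image is not None for turn in recent):
            return self.vlm_complete(history, prefix)
        return self.text_complete(prefix)
```

Under this sketch, a lightweight variant for resource-constrained settings could swap the VLM path for a distilled multimodal model behind the same interface, preserving the routing logic.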