The recent development of large multimodal models (LMMs), especially GPT-4V(ision) and Gemini, has been rapidly expanding the capability boundaries of multimodal models beyond traditional tasks like image captioning and visual question answering. In this work, we explore the potential of LMMs like GPT-4V as a generalist web agent that can follow natural language instructions to complete tasks on any given website. We propose SEEACT, a generalist web agent that harnesses the power of LMMs for integrated visual understanding and acting on the web. We evaluate SEEACT on the recent MIND2WEB benchmark. In addition to standard offline evaluation on cached websites, we enable a new online evaluation setting by developing a tool that allows running web agents on live websites. We show that GPT-4V presents great potential for web agents: it can successfully complete 51.1% of the tasks on live websites if we manually ground its textual plans into actions on the websites. This substantially outperforms text-only LLMs like GPT-4 and smaller models (FLAN-T5 and BLIP-2) specifically fine-tuned for web agents. However, grounding remains a major challenge. Existing LMM grounding strategies like set-of-mark prompting turn out to be ineffective for web agents, and the best grounding strategy we develop in this paper leverages both the HTML structure and visuals. Yet, there is still a substantial gap from oracle grounding, leaving ample room for further improvement. All code, data, and evaluation tools are available at https://github.com/OSU-NLP-Group/SeeAct.