Approximately 200 million people worldwide live with varying degrees of visual impairment, making it crucial to leverage AI technology to offer these individuals walking assistance. With recent progress in vision-language models (VLMs), applying VLMs to provide walking guidance has become popular. However, existing walking-guidance methods are mainly based on self-curated question-answering datasets that are not publicly accessible, and no standardized benchmark exists for training or evaluation. Moreover, walking assistance often requires real-time analysis of streaming video and the generation of concise yet informative reminders, which makes VLMs struggle due to their verbose responses and low inference efficiency. In this paper, we introduce the first large-scale dataset dedicated to walking assistance, comprising 12,000 video-annotation pairs, to provide a unified benchmark for training and evaluating systems that help visually impaired individuals walk. Furthermore, we propose WalkVLM, a model that employs chain-of-thought reasoning for hierarchical planning to generate concise yet informative reminders, and that uses temporal-aware adaptive prediction to reduce the temporal redundancy of those reminders. Finally, we establish a solid benchmark for the blind walking task and verify the advantages of WalkVLM over other VLMs in streaming video processing for this task. Our dataset and code are available at https://walkvlm2024.github.io.
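To make the temporal-aware adaptive prediction idea concrete, the following is a minimal Python sketch of how a streaming assistant could gate VLM calls so that reminders fire only when the scene changes enough. This is an illustrative assumption, not the paper's actual predictor: the `frame_change_score` heuristic, the `vlm_describe` callable, and all thresholds are hypothetical stand-ins for a learned temporal-aware module.

```python
import numpy as np

# Hypothetical sketch of temporal-aware adaptive prediction: a reminder is
# requested from the VLM only when the estimated scene-change score of the
# current frame exceeds a threshold and enough frames have elapsed since the
# last reminder, suppressing temporally redundant outputs. The mean absolute
# frame difference below is a simple stand-in for a learned predictor.

def frame_change_score(prev_frame: np.ndarray, cur_frame: np.ndarray) -> float:
    """Proxy score in [0, 1] for scene change between consecutive frames."""
    diff = np.abs(cur_frame.astype(np.float32) - prev_frame.astype(np.float32))
    return float(diff.mean() / 255.0)

def should_trigger(score: float, frames_since_last: int,
                   threshold: float = 0.15, min_gap: int = 30) -> bool:
    """Fire only on sufficient change and after a minimum frame gap."""
    return score >= threshold and frames_since_last >= min_gap

def stream_reminders(frames, vlm_describe):
    """Walk a frame stream; call the (assumed) VLM only when triggered."""
    reminders, gap, prev = [], 10**9, None
    for frame in frames:
        if prev is not None and should_trigger(frame_change_score(prev, frame), gap):
            reminders.append(vlm_describe(frame))  # concise reminder text
            gap = 0
        prev, gap = frame, gap + 1
    return reminders
```

Under this sketch, a static scene produces no calls at all, while a sudden obstacle (large frame difference) triggers one concise reminder followed by a cool-down window, which is the behavior the abstract attributes to reducing temporal redundancy.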