The rapid evolution of embodied agents has accelerated the deployment of household robots in real-world environments. However, unlike structured industrial settings, household spaces introduce unpredictable safety risks, where system limitations such as perception latency and a lack of commonsense knowledge can lead to dangerous errors. Current safety evaluations, often restricted to static images, text, or general hazards, fail to adequately benchmark dynamic unsafe-action detection in these specific contexts. To bridge this gap, we introduce HomeSafe-Bench, a challenging benchmark designed to evaluate Vision-Language Models (VLMs) on unsafe-action detection in household scenarios. HomeSafe-Bench is constructed via a hybrid pipeline combining physical simulation with advanced video generation, and features 438 diverse cases across six functional areas with fine-grained, multidimensional annotations. Beyond benchmarking, we propose the Hierarchical Dual-Brain Guard for Household Safety (HD-Guard), a hierarchical streaming architecture for real-time safety monitoring. HD-Guard coordinates a lightweight FastBrain, which performs continuous high-frequency screening, with an asynchronous large-scale SlowBrain, which performs deep multimodal reasoning, effectively balancing inference efficiency with detection accuracy. Evaluations demonstrate that HD-Guard achieves a superior trade-off between latency and performance, while our analysis identifies critical bottlenecks in current VLM-based safety detection.
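The dual-brain coordination described above can be sketched as a minimal streaming loop: a synchronous FastBrain screens every incoming frame, while buffered clips are handed off to an asynchronous SlowBrain worker for deeper analysis. This is a hypothetical illustration, not the paper's implementation; the `fast_brain`/`slow_brain` functions, the frame dictionaries, and all thresholds are placeholder stand-ins for the actual VLM components.

```python
import queue
import threading


def fast_brain(frame):
    # Hypothetical lightweight screen: flag any single frame whose
    # hazard score crosses a threshold (stand-in for a small VLM).
    return frame["hazard_score"] > 0.5


def slow_brain(frames):
    # Hypothetical deep reasoning over a buffered clip: here a simple
    # average stands in for a large multimodal model's judgment.
    return sum(f["hazard_score"] for f in frames) / len(frames) > 0.6


class DualBrainGuard:
    """Sketch of FastBrain/SlowBrain coordination over a frame stream."""

    def __init__(self, clip_len=4):
        self.clip_len = clip_len
        self.buffer = []                 # frames awaiting clip assembly
        self.pending = queue.Ueue() if False else queue.Queue()  # clips for SlowBrain
        self.alerts = []
        self.worker = threading.Thread(target=self._slow_loop, daemon=True)
        self.worker.start()

    def _slow_loop(self):
        # SlowBrain runs asynchronously so it never blocks frame ingestion.
        while True:
            clip = self.pending.get()
            if clip is None:             # shutdown sentinel
                break
            if slow_brain(clip):
                self.alerts.append(("slow", clip[0]["t"]))

    def ingest(self, frame):
        # FastBrain runs synchronously on every frame (high-frequency screening).
        if fast_brain(frame):
            self.alerts.append(("fast", frame["t"]))
        # Buffer frames; full clips are queued for asynchronous SlowBrain review.
        self.buffer.append(frame)
        if len(self.buffer) == self.clip_len:
            self.pending.put(self.buffer)
            self.buffer = []

    def close(self):
        # Drain the queue and stop the worker.
        self.pending.put(None)
        self.worker.join()
```

Because the SlowBrain worker consumes clips from a FIFO queue, every queued clip is processed before the shutdown sentinel, so all slow alerts are in place once `close()` returns; only the fast path sits on the per-frame latency budget.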