The rapid evolution of embodied agents has accelerated the deployment of household robots in real-world environments. However, unlike structured industrial settings, household spaces introduce unpredictable safety risks, where system limitations such as perception latency and a lack of commonsense knowledge can lead to dangerous errors. Current safety evaluations, often restricted to static images, text, or general hazards, fail to adequately benchmark dynamic unsafe action detection in these specific contexts. To bridge this gap, we introduce \textbf{HomeSafe-Bench}, a challenging benchmark designed to evaluate Vision-Language Models (VLMs) on unsafe action detection in household scenarios. HomeSafe-Bench is constructed via a hybrid pipeline combining physical simulation with advanced video generation, and features 438 diverse cases across six functional areas with fine-grained, multidimensional annotations. Beyond benchmarking, we propose the \textbf{Hierarchical Dual-Brain Guard for Household Safety (HD-Guard)}, a hierarchical streaming architecture for real-time safety monitoring. HD-Guard coordinates a lightweight FastBrain, which performs continuous high-frequency screening, with an asynchronous large-scale SlowBrain, which conducts deep multimodal reasoning, effectively balancing inference efficiency with detection accuracy. Evaluations demonstrate that HD-Guard achieves a superior trade-off between latency and performance, while our analysis identifies critical bottlenecks in current VLM-based safety detection.