We present KITE, a training-free, keyframe-anchored, layout-grounded front-end that converts long robot-execution videos into compact, interpretable tokenized evidence for vision-language models (VLMs). KITE distills each trajectory into a small set of motion-salient keyframes with open-vocabulary detections and pairs each keyframe with a schematic bird's-eye-view (BEV) representation that encodes relative object layout, axes, timestamps, and detection confidence. These visual cues are serialized with robot-profile and scene-context tokens into a unified prompt, allowing the same front-end to support failure detection, identification, localization, explanation, and correction with an off-the-shelf VLM. On the RoboFAC benchmark, KITE with Qwen2.5-VL substantially improves over vanilla Qwen2.5-VL in the training-free setting, with especially large gains on failure detection, identification, and localization in simulation, while remaining competitive with a RoboFAC-tuned baseline. A small QLoRA fine-tune further improves explanation and correction quality. We also report qualitative results on real dual-arm robots, demonstrating the practical applicability of KITE as a structured, interpretable front-end for robot failure analysis. Code and models are released on our project page: https://m80hz.github.io/kite/
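To make the serialization step concrete, the minimal Python sketch below shows one plausible way keyframe evidence (timestamps, open-vocabulary detections with confidence, schematic BEV positions) could be flattened, together with robot-profile and scene-context tokens, into a single text prompt for a VLM. The data structures, the `[ROBOT]`/`[SCENE]`/`[KEYFRAME]` tags, and the `serialize_prompt` helper are illustrative assumptions, not the paper's actual token schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical evidence structures; KITE's real schema may differ.
@dataclass
class Detection:
    label: str                    # open-vocabulary class name
    conf: float                   # detector confidence in [0, 1]
    bev_xy: Tuple[float, float]   # schematic bird's-eye-view position

@dataclass
class Keyframe:
    t: float                      # timestamp (s) of the motion-salient frame
    detections: List[Detection]

def serialize_prompt(robot_profile: str, scene_context: str,
                     keyframes: List[Keyframe]) -> str:
    """Flatten keyframe evidence into one unified text prompt (assumed format)."""
    lines = [f"[ROBOT] {robot_profile}", f"[SCENE] {scene_context}"]
    for i, kf in enumerate(keyframes):
        lines.append(f"[KEYFRAME {i} t={kf.t:.1f}s]")
        for d in kf.detections:
            lines.append(f"  {d.label} conf={d.conf:.2f} "
                         f"bev=({d.bev_xy[0]:.2f},{d.bev_xy[1]:.2f})")
    lines.append("[TASK] Detect, identify, localize, explain, "
                 "and correct any failure.")
    return "\n".join(lines)

# Example usage with made-up values:
prompt = serialize_prompt(
    robot_profile="dual-arm, parallel-jaw grippers",
    scene_context="tabletop pick-and-place",
    keyframes=[Keyframe(t=3.2, detections=[
        Detection("mug", 0.91, (0.12, -0.30)),
        Detection("gripper_left", 0.88, (0.10, -0.28)),
    ])],
)
print(prompt)
```

A text-only serialization like this keeps the prompt compact and lets the same front-end drive all five failure-analysis subtasks by swapping only the `[TASK]` line.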