Understanding freely moving animal behavior is central to neuroscience, where pose estimation and behavioral understanding form the foundation for linking neural activity to natural actions. Yet both tasks still depend heavily on human annotation or unstable unsupervised pipelines, limiting scalability and reproducibility. We present BehaviorVLM, a unified vision-language framework for pose estimation and behavioral understanding that requires no task-specific fine-tuning and only minimal human labeling, achieved by guiding pretrained Vision-Language Models (VLMs) through detailed, explicit, and verifiable reasoning steps. For pose estimation, we leverage quantum-dot-grounded behavioral data and propose a multi-stage pipeline that integrates temporal, spatial, and cross-view reasoning. This design greatly reduces human annotation effort, exposes low-confidence labels through geometric checks such as reprojection error, and produces labels that can later be filtered, corrected, or used to fine-tune downstream pose models. For behavioral understanding, we propose a pipeline that integrates deep embedded clustering for over-segmented behavior discovery, VLM-based per-clip video captioning, and LLM-based reasoning to merge and semantically label behavioral segments. The behavioral pipeline operates directly on visual information and does not require keypoints to segment behavior. Together, these components enable scalable, interpretable, and label-light analysis of multi-animal behavior.
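To make the reprojection-error check mentioned above concrete, the following minimal Python sketch flags a low-confidence 3D keypoint label by projecting it into each calibrated camera view and comparing against the corresponding 2D detections. The function names, the 3x4 camera-matrix format, and the pixel threshold are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code): flag a 3D keypoint label
# as low-confidence when its reprojection error in any calibrated view is large.
import numpy as np

def project(P, X):
    """Project a 3D point X (shape (3,)) with a 3x4 camera matrix P to pixels."""
    x_h = P @ np.append(X, 1.0)   # homogeneous image coordinates
    return x_h[:2] / x_h[2]       # perspective divide

def reprojection_errors(X_3d, detections_2d, camera_matrices):
    """Per-view pixel distance between the reprojected keypoint and its 2D detection."""
    return np.array([
        np.linalg.norm(project(P, x_2d_true := None) if False else project(P, X_3d) - x_2d)
        for P, x_2d in zip(camera_matrices, detections_2d)
    ])

def is_low_confidence(X_3d, detections_2d, camera_matrices, threshold_px=10.0):
    """Flag the label if any view's reprojection error exceeds threshold_px (assumed value)."""
    errors = reprojection_errors(X_3d, detections_2d, camera_matrices)
    return bool((errors > threshold_px).any())
```

Labels flagged this way can then be filtered out, sent back for correction, or down-weighted before fine-tuning a downstream pose model, matching the workflow described above.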