Despite recent advances, video captioning models still struggle to describe fine-grained motion details accurately and suffer from severe hallucination. These challenges are particularly pronounced for motion-centric videos, where precise depiction of intricate movements and limb dynamics is crucial yet often neglected. To bridge this gap, we introduce an automated annotation pipeline that integrates kinematics-based motion computation with linguistic parsing, enabling detailed decomposition and description of complex human motions. Based on this pipeline, we construct and release the Kinematic Parsing Motion Benchmark (KPM-Bench), a novel open-source dataset designed to facilitate fine-grained motion understanding. KPM-Bench comprises (i) fine-grained video-caption pairs that comprehensively capture limb-level dynamics in complex actions, (ii) diverse and challenging question-answer pairs focused specifically on motion understanding, and (iii) a meticulously curated evaluation set designed to assess hallucination in motion descriptions. Furthermore, to address hallucination systematically, we propose the linguistically grounded Motion Parsing and Extraction (MoPE) algorithm, which accurately extracts motion-specific attributes directly from textual captions. Leveraging MoPE, we introduce a precise hallucination evaluation metric that requires no large-scale vision-language or language-only models. Finally, by integrating MoPE into the GRPO post-training framework, we effectively mitigate hallucination and substantially improve the reliability of motion-centric video captioning models.
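To make the attribute-extraction and model-free scoring ideas concrete, here is a minimal, purely illustrative sketch: a keyword-based extractor pulls motion attributes (body parts, motion verbs, directions) from a caption, and a hallucination score counts generated attributes unsupported by the reference. All vocabularies and function names are hypothetical and greatly simplified relative to the paper's linguistically grounded MoPE, which relies on full linguistic parsing rather than keyword lists.

```python
import re

# Hypothetical vocabularies for illustration only; the actual MoPE
# algorithm uses linguistic parsing, not fixed keyword lists.
BODY_PARTS = {"arm", "arms", "leg", "legs", "hand", "hands", "knee",
              "knees", "elbow", "elbows", "foot", "feet", "torso", "head"}
MOTION_VERBS = {"raise", "raises", "lift", "lifts", "bend", "bends",
                "extend", "extends", "swing", "swings", "rotate", "rotates",
                "kick", "kicks", "step", "steps"}
DIRECTIONS = {"up", "upward", "down", "downward", "forward", "backward",
              "left", "right", "sideways"}

def extract_motion_attributes(caption: str) -> set:
    """Return the set of motion-specific tokens found in the caption."""
    tokens = re.findall(r"[a-z]+", caption.lower())
    vocab = BODY_PARTS | MOTION_VERBS | DIRECTIONS
    return {t for t in tokens if t in vocab}

def hallucination_score(reference: str, generated: str) -> float:
    """Fraction of motion attributes in the generated caption that are
    NOT supported by the reference caption (lower is better).
    No vision-language or language-only model is needed."""
    ref = extract_motion_attributes(reference)
    gen = extract_motion_attributes(generated)
    if not gen:
        return 0.0
    return len(gen - ref) / len(gen)

ref = "The dancer raises her left arm upward while the right knee bends."
gen = "The dancer raises her left arm upward and kicks her right leg forward."
print(hallucination_score(ref, gen))  # 3 of 8 generated attributes unsupported
```

A score of this form could also serve directly as a (negated) reward signal in a GRPO-style post-training loop, penalizing captions that introduce unsupported motion attributes.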