Existing video large language models (VLLMs) primarily leverage prompt-agnostic visual encoders, which extract untargeted facial representations without awareness of the queried information, leading to the loss of task-critical cues. To address this challenge, we propose FaVChat, the first VLLM designed for reasoning over subtle visual and dynamic facial cues. FaVChat introduces a hierarchical, prompt-guided visual feature extraction framework that emphasizes question-relevant information at three complementary levels. These multi-level features are dynamically fused and injected into the LLM, enabling more accurate reasoning over facial details. To further improve learning efficiency under data scarcity, we propose Data-Efficient GRPO, a reinforcement learning strategy that iteratively identifies high-utility samples and maximizes the contribution of each instance via per-instance utility estimation, substantially enhancing performance gains under limited supervision. We construct a large-scale benchmark dataset, FaVChat-170K, comprising approximately 60K high-quality facial videos and 170K question-answer pairs focused on fine-grained facial details. Extensive experiments, including zero-shot evaluations on four facial understanding tasks, demonstrate that FaVChat consistently outperforms existing VLLMs.
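To make the prompt-guided fusion idea concrete, below is a minimal sketch of how question-conditioned weighting of three feature levels could be realized. The module name PromptGuidedFusion, the feature dimension, and the softmax gating are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PromptGuidedFusion(nn.Module):
    """Hypothetical sketch: fuse three levels of facial features,
    weighted by their relevance to the question embedding.
    Names, dimensions, and the gating scheme are assumptions."""

    def __init__(self, dim: int = 768, num_levels: int = 3):
        super().__init__()
        # One shared gate scores each feature level, conditioned on the prompt.
        self.gate = nn.Linear(2 * dim, 1)
        self.num_levels = num_levels

    def forward(self, level_feats: list[torch.Tensor], prompt_emb: torch.Tensor) -> torch.Tensor:
        # level_feats: list of (batch, dim) features from the three granularities.
        # prompt_emb:  (batch, dim) embedding of the question.
        scores = [self.gate(torch.cat([f, prompt_emb], dim=-1)) for f in level_feats]
        weights = torch.softmax(torch.cat(scores, dim=-1), dim=-1)  # (batch, num_levels)
        stacked = torch.stack(level_feats, dim=1)                   # (batch, num_levels, dim)
        # Weighted sum emphasizes the levels most relevant to the question.
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)
```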
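Likewise, a toy sketch of the per-instance utility idea behind Data-Efficient GRPO: under a group-relative objective, the normalized advantage vanishes when all rollouts of a sample receive similar rewards, so one plausible utility proxy is the spread of rollout rewards. The proxy and selection scheme here are assumptions for illustration only.

```python
import numpy as np

def select_high_utility(samples, rollout_reward, num_rollouts: int = 8, keep_ratio: float = 0.5):
    """Hypothetical per-instance utility estimation. rollout_reward(sample)
    is assumed to score one sampled response for the given training instance;
    reward standard deviation serves as the utility proxy (an assumption)."""
    utilities = []
    for sample in samples:
        # Sample several candidate responses for this instance and score each.
        rewards = np.array([rollout_reward(sample) for _ in range(num_rollouts)])
        utilities.append(rewards.std())  # flat reward groups carry little signal
    order = np.argsort(utilities)[::-1]  # highest-utility instances first
    keep = order[: max(1, int(keep_ratio * len(samples)))]
    return [samples[i] for i in keep]
```

Iterating this selection over training rounds would concentrate the limited supervision budget on the samples whose rollouts still disagree, which matches the abstract's goal of maximizing each instance's contribution under data scarcity.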