Egocentric video understanding is inherently complex due to the dynamic 4D nature of the environment, where camera motion and object displacements necessitate continuous re-evaluation of spatial relations. In this work, we target a suite of under-explored egocentric 4D reasoning tasks (fixture interaction counting, viewpoint-relative fixture location, object movement itinerary tracking, and stationary object localization) that require fundamentally different cognitive operations: spatial anchoring, temporal tracking, and duration reasoning. We observe that these structural differences make task-agnostic approaches insufficient: generic Chain-of-Thought methods lack task-appropriate reasoning primitives, and uniform reinforcement learning actively destabilizes performance on spatial tasks. To address this, we propose EgoReasoner, a two-stage framework that aligns both the reasoning scaffold and the reward signal with each task's cognitive structure. In the first stage, Task-Adaptive Thinking Templates guide the synthesis of structured CoT traces that, via supervised fine-tuning, teach the model to reason adaptively across task types. In the second stage, task-aware reward functions verify entity grounding, temporal alignment, and task-adaptive logical consistency, selectively strengthening each reasoning pathway via reinforcement fine-tuning with GRPO. Our 3B-parameter model, trained on only 16K samples, achieves 37.5% average accuracy on the challenging HD-EPIC benchmark, surpassing Qwen2.5-VL-7B (25.7%) by 11.8 points.
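To make the second stage concrete, the sketch below shows one way a task-aware verifiable reward could be assembled for GRPO-style reinforcement fine-tuning, combining entity-grounding, temporal-alignment, and final-answer checks with per-task weights. This is a minimal illustration under stated assumptions: every function name, task key, and weight here is hypothetical, and the abstract does not specify the paper's actual reward implementation.

```python
# Hypothetical sketch of a task-aware composite reward for GRPO-style RFT.
# All names, task keys, and weights are illustrative assumptions, not the
# paper's implementation.

def entity_grounding_reward(trace: str, gold_entities: set[str]) -> float:
    """Fraction of annotated entities mentioned in the reasoning trace."""
    if not gold_entities:
        return 1.0
    hits = sum(1 for e in gold_entities if e.lower() in trace.lower())
    return hits / len(gold_entities)

def temporal_alignment_reward(pred_times: list[float],
                              gold_times: list[float],
                              tol: float = 2.0) -> float:
    """Fraction of predicted timestamps within `tol` seconds of a gold one."""
    if not pred_times:
        return 0.0
    hits = sum(1 for t in pred_times
               if any(abs(t - g) <= tol for g in gold_times))
    return hits / len(pred_times)

def answer_reward(pred: str, gold: str) -> float:
    """Exact-match check on the final answer."""
    return float(pred.strip().lower() == gold.strip().lower())

# Per-task weighting: spatial tasks lean on grounding, temporal tasks on
# alignment. These specific weights are assumptions for exposition.
TASK_WEIGHTS = {
    "fixture_counting":    {"ground": 0.3, "time": 0.2, "answer": 0.5},
    "fixture_location":    {"ground": 0.4, "time": 0.1, "answer": 0.5},
    "movement_itinerary":  {"ground": 0.2, "time": 0.4, "answer": 0.4},
    "object_localization": {"ground": 0.3, "time": 0.3, "answer": 0.4},
}

def task_aware_reward(task: str, trace: str, pred: str, gold: str,
                      gold_entities: set[str],
                      pred_times: list[float],
                      gold_times: list[float]) -> float:
    """Weighted sum of verifiable components, selected by task type."""
    w = TASK_WEIGHTS[task]
    return (w["ground"] * entity_grounding_reward(trace, gold_entities)
            + w["time"] * temporal_alignment_reward(pred_times, gold_times)
            + w["answer"] * answer_reward(pred, gold))

# Example: a fully correct itinerary-tracking rollout scores 1.0.
r = task_aware_reward(
    task="movement_itinerary",
    trace="The mug moves from the counter to the sink at around 12s.",
    pred="counter -> sink",
    gold="counter -> sink",
    gold_entities={"mug", "counter", "sink"},
    pred_times=[12.0],
    gold_times=[11.5],
)
print(f"reward = {r:.3f}")
```

Under this kind of decomposition, GRPO's group-normalized advantages would selectively reinforce whichever reasoning pathway the task's weighting emphasizes, matching the abstract's description of strengthening each pathway per task.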