Human gait recognition is crucial in multimedia, enabling identification through walking patterns without direct interaction and enhancing integration across media forms in real-world applications such as smart homes, healthcare, and non-intrusive security. LiDAR's ability to capture depth makes it pivotal for robotic perception and holds promise for real-world gait recognition. In this paper, we present the Hierarchical Multi-representation Feature Interaction Network (HMRNet), which achieves robust gait recognition from a single LiDAR. Prevailing LiDAR-based gait datasets are primarily collected in controlled settings with predefined trajectories, leaving a gap to real-world scenarios. To facilitate LiDAR-based gait recognition research, we introduce FreeGait, a comprehensive gait dataset captured in large-scale, unconstrained settings and enriched with multi-modal 2D/3D data. Notably, our approach achieves state-of-the-art performance on both the prior SUSTech1K dataset and FreeGait. Code and dataset will be released upon publication of this paper.