Gait datasets are essential for gait research. However, this paper observes that existing benchmarks, whether conventional constrained datasets or emerging real-world ones, fall short in covariate diversity. To bridge this gap, we undertook an arduous 20-month effort to collect a cross-covariate gait recognition (CCGR) dataset. The CCGR dataset contains 970 subjects and about 1.6 million sequences; almost every subject is captured under 33 views and 53 different covariates. Compared with existing datasets, CCGR offers diversity at both the population and individual levels. In addition, the views and covariates are carefully labeled, enabling analysis of the effects of individual factors. CCGR provides multiple types of gait data, including RGB, parsing, silhouette, and pose, offering researchers a comprehensive resource for exploration. To delve deeper into cross-covariate gait recognition, we propose parsing-based gait recognition (ParsingGait), which exploits the newly introduced parsing data. We have conducted extensive experiments, and our main results show: 1) Cross-covariate variation emerges as a pivotal challenge for practical applications of gait recognition. 2) ParsingGait demonstrates remarkable potential for further advancement. 3) Alarmingly, existing SOTA methods achieve less than 43% accuracy on CCGR, highlighting the urgency of exploring cross-covariate gait recognition. Link: https://github.com/ShinanZou/CCGR.