Invariant feature extraction for gait recognition using only one uniform model

Shiqi Yu, Haifeng Chen, Qing Wang, Linlin Shen, Yongzhen Huang

Research output: Journal Publication › Article › peer-review

190 Citations (Scopus)

Abstract

Gait recognition has proved useful for human identification at a distance, but variations such as view, clothing and carrying condition still make it challenging in real applications. These variations make it hard to extract invariant features that distinguish different subjects. For view variation, a view transformation model can be employed to convert the gait feature from one view to another; however, most existing models need to estimate the view angle first and work for only one view pair, so they cannot convert multi-view data to one specific view efficiently. Other variations likewise require their own specific models. We employ a deep model based on auto-encoders for invariant gait feature extraction. The model synthesizes gait features progressively through stacked multi-layer auto-encoders. Its unique advantage is that it extracts invariant gait features using only one model, and the extracted features are robust to view, clothing and carrying-condition variations. The proposed method is evaluated on two large gait datasets, CASIA Gait Dataset B and the SZU RGB-D Gait Dataset. The experimental results show that the proposed method achieves state-of-the-art performance with only one uniform model.
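The abstract describes progressive feature synthesis with stacked multi-layer auto-encoders. As a generic illustration of that technique (not the authors' actual network, whose architecture, input representation and training objective are specified in the paper itself), below is a minimal NumPy sketch of greedy layer-wise training of a tied-weight stacked auto-encoder: each layer is trained to reconstruct its input, and its hidden code becomes the input to the next layer, yielding a progressively more compact representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AutoEncoderLayer:
    """One tied-weight auto-encoder layer: encode x -> h, decode h -> x_hat."""

    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # shared by encoder/decoder
        self.b = np.zeros(n_hidden)                      # encoder bias
        self.c = np.zeros(n_in)                          # decoder bias
        self.lr = lr

    def encode(self, X):
        return sigmoid(X @ self.W + self.b)

    def decode(self, H):
        return sigmoid(H @ self.W.T + self.c)

    def train_step(self, X):
        """One gradient step on the squared reconstruction error."""
        H = self.encode(X)
        X_hat = self.decode(H)
        d_out = (X_hat - X) * X_hat * (1.0 - X_hat)   # (B, n_in)
        d_hid = (d_out @ self.W) * H * (1.0 - H)      # (B, n_hidden)
        # Tied weights: W receives gradient from both encode and decode paths.
        gW = X.T @ d_hid + d_out.T @ H
        self.W -= self.lr * gW / len(X)
        self.b -= self.lr * d_hid.mean(axis=0)
        self.c -= self.lr * d_out.mean(axis=0)
        return float(np.mean((X_hat - X) ** 2))

def train_stacked(X, hidden_sizes, epochs=300):
    """Greedy layer-wise pre-training: each trained layer's code feeds the next."""
    layers, rep = [], X
    for n_hidden in hidden_sizes:
        layer = AutoEncoderLayer(rep.shape[1], n_hidden)
        for _ in range(epochs):
            layer.train_step(rep)
        rep = layer.encode(rep)   # progressive re-encoding of the input
        layers.append(layer)
    return layers, rep

# Toy usage: 32 samples of a 20-dim feature, compressed 20 -> 16 -> 8.
X = rng.random((32, 20))
layers, features = train_stacked(X, [16, 8])
```

In this sketch the per-layer learning rate, layer sizes and epoch count are arbitrary toy choices; the key structural idea is the greedy stacking loop, which matches the "progressive" synthesis the abstract refers to only at the level of the general stacked auto-encoder recipe.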

Original language: English
Pages (from-to): 81-93
Number of pages: 13
Journal: Neurocomputing
Volume: 239
DOIs
Publication status: Published - 24 May 2017
Externally published: Yes

Keywords

  • Deep learning
  • Gait recognition
  • Invariant feature

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
