Abstract
Gait recognition has proved useful for human identification at a distance. However, variations such as view, clothing, and carrying condition still make gait recognition challenging in real applications, because they make it hard to extract invariant features that distinguish different subjects. For view variation, a view transformation model can be employed to convert the gait feature from one view to another, but most existing models must estimate the view angle first and work for only one view pair; they cannot efficiently convert multi-view data to one specific view. Other variations likewise require their own specific models. We employ a deep model based on auto-encoders for invariant gait feature extraction. The model synthesizes gait features progressively through stacked multi-layer auto-encoders. Its unique advantage is that a single model suffices, and the extracted gait feature is robust to view, clothing, and carrying-condition variations. The proposed method is evaluated on two large gait datasets, CASIA Gait Dataset B and the SZU RGB-D Gait Dataset. The experimental results show that the proposed method achieves state-of-the-art performance with only one uniform model.
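The stacked multi-layer auto-encoder described above can be sketched in a few lines. This is a minimal illustrative forward pass only, assuming the input is a flattened gait silhouette feature; the layer sizes, activation, and class names here are hypothetical and are not taken from the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class StackedAutoEncoder:
    """Encoder half of a stacked auto-encoder (illustrative sketch)."""

    def __init__(self, layer_sizes):
        # One (weights, bias) pair per encoding layer; weights here are
        # random placeholders, whereas a real model would be trained
        # layer by layer and then fine-tuned end to end.
        self.layers = [
            (rng.standard_normal((m, n)) * 0.01, np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])
        ]

    def encode(self, x):
        # Each layer progressively refines the representation toward a
        # more invariant feature, mirroring the progressive synthesis idea.
        for W, b in self.layers:
            x = sigmoid(x @ W + b)
        return x

# Example: a 64x64 silhouette flattened to 4096 dims, reduced to a
# 128-dimensional feature (sizes chosen only for illustration).
sae = StackedAutoEncoder([4096, 1024, 256, 128])
gait_input = rng.random(4096)
feature = sae.encode(gait_input)
print(feature.shape)  # (128,)
```

At test time, such a feature would be compared across gallery and probe sequences with a simple distance metric, which is what makes a single uniform model attractive compared with per-view transformation models.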
| Original language | English |
|---|---|
| Pages (from-to) | 81-93 |
| Number of pages | 13 |
| Journal | Neurocomputing |
| Volume | 239 |
| DOIs | |
| Publication status | Published - 24 May 2017 |
| Externally published | Yes |
Keywords
- Deep learning
- Gait recognition
- Invariant feature
ASJC Scopus subject areas
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence