TY - GEN
T1 - View invariant gait recognition using only one uniform model
AU - Yu, Shiqi
AU - Wang, Qing
AU - Shen, Linlin
AU - Huang, Yongzhen
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/1/1
Y1 - 2016/1/1
N2 - Gait recognition has proven useful for human identification at a distance, but view variance of gait features remains a great challenge because of differences in appearance. If the view of the probe differs from that of the gallery, a view transformation model can be employed to convert the gait feature from one view to another. However, most existing models need to estimate the view angle first and work for only one view pair; they cannot convert multi-view data to one specific view efficiently. We employ a single deep model based on auto-encoders for view-invariant gait feature extraction. The model synthesizes gait features progressively through stacked multi-layer auto-encoders. Its unique advantage is that it can extract view-invariant features from any view using only one model, and no view estimation is needed. The proposed method is evaluated on a large dataset, CASIA Gait Dataset B. The experimental results show that it achieves state-of-the-art performance, and the improvement is more pronounced when the view variance is larger.
AB - Gait recognition has proven useful for human identification at a distance, but view variance of gait features remains a great challenge because of differences in appearance. If the view of the probe differs from that of the gallery, a view transformation model can be employed to convert the gait feature from one view to another. However, most existing models need to estimate the view angle first and work for only one view pair; they cannot convert multi-view data to one specific view efficiently. We employ a single deep model based on auto-encoders for view-invariant gait feature extraction. The model synthesizes gait features progressively through stacked multi-layer auto-encoders. Its unique advantage is that it can extract view-invariant features from any view using only one model, and no view estimation is needed. The proposed method is evaluated on a large dataset, CASIA Gait Dataset B. The experimental results show that it achieves state-of-the-art performance, and the improvement is more pronounced when the view variance is larger.
UR - http://www.scopus.com/inward/record.url?scp=85019149104&partnerID=8YFLogxK
U2 - 10.1109/ICPR.2016.7899748
DO - 10.1109/ICPR.2016.7899748
M3 - Conference contribution
AN - SCOPUS:85019149104
T3 - Proceedings - International Conference on Pattern Recognition
SP - 889
EP - 894
BT - 2016 23rd International Conference on Pattern Recognition, ICPR 2016
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd International Conference on Pattern Recognition, ICPR 2016
Y2 - 4 December 2016 through 8 December 2016
ER -
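
The abstract above describes a stacked multi-layer auto-encoder that maps gait features from arbitrary views to a single view-invariant representation. Below is a minimal illustrative sketch of that general idea, not the authors' implementation: it assumes flattened 64x64 gait energy images (GEIs) as input, made-up layer sizes, and a training objective that regresses each multi-view GEI onto a canonical-view GEI of the same subject; PyTorch is used here for brevity.

# Minimal sketch (assumptions only, not the paper's code): a stacked
# auto-encoder that takes a flattened GEI from any view and reconstructs a
# canonical-view GEI, with the encoder output used as the gait feature.
import torch
import torch.nn as nn

class StackedGaitAutoEncoder(nn.Module):
    def __init__(self, in_dim=64 * 64, hidden_dims=(2048, 1024, 512)):
        super().__init__()
        enc, prev = [], in_dim
        for h in hidden_dims:                      # encoder: in_dim -> ... -> 512
            enc += [nn.Linear(prev, h), nn.Tanh()]
            prev = h
        dec = []
        for h in list(hidden_dims[-2::-1]) + [in_dim]:  # decoder mirrors the encoder
            dec += [nn.Linear(prev, h), nn.Tanh()]
            prev = h
        self.encoder = nn.Sequential(*enc)         # view-invariant feature extractor
        self.decoder = nn.Sequential(*dec)         # reconstructs a canonical-view GEI

    def forward(self, x):
        z = self.encoder(x)                        # feature used for recognition
        return self.decoder(z), z

# Training sketch with dummy data: each input GEI (arbitrary view) is regressed
# onto the GEI of the same subject at a chosen canonical view.
model = StackedGaitAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 64 * 64)        # batch of flattened multi-view GEIs (dummy)
target = torch.rand(8, 64 * 64)   # corresponding canonical-view GEIs (dummy)
recon, feat = model(x)
loss = nn.functional.mse_loss(recon, target)
opt.zero_grad()
loss.backward()
opt.step()

In the paper's setting, the encoder output would play the role of the view-invariant feature compared between probe and gallery, and the abstract's "progressive" synthesis suggests a layer-wise training schedule rather than the single joint update shown here.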