TY - GEN
T1 - GazeFlow
T2 - 2021 International Joint Conference on Neural Networks, IJCNN 2021
AU - Wu, Yong
AU - Liang, Hanbang
AU - Hou, Xianxu
AU - Shen, Linlin
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/7/18
Y1 - 2021/7/18
N2 - Gaze estimation often requires a large-scale dataset with well-annotated gaze information to train the estimator. However, such a dataset requires costly annotation and is usually very difficult to collect. Therefore, a number of gaze redirection approaches have been proposed to address this problem. However, existing methods lack the ability to precisely synthesize images with target gaze and head pose in complex lighting scenes. As a powerful technique for modeling the distribution of given data, normalizing flows have the ability to generate photo-realistic images and provide flexible latent space manipulation. In this work, we present a novel flow-based generative model, GazeFlow (the code will be made available at https://github.com/CVI-SZU/GazeFlow), for gaze redirection. The visual results of gaze redirection show that the quality of eye images synthesized by GazeFlow is significantly higher than that of other approaches such as Deep Warp and PRGAN. Our approach has also been applied to augment training data to improve the accuracy of gaze estimators, and significant improvements have been achieved in both within-dataset and cross-dataset experiments.
AB - Gaze estimation often requires a large-scale dataset with well-annotated gaze information to train the estimator. However, such a dataset requires costly annotation and is usually very difficult to collect. Therefore, a number of gaze redirection approaches have been proposed to address this problem. However, existing methods lack the ability to precisely synthesize images with target gaze and head pose in complex lighting scenes. As a powerful technique for modeling the distribution of given data, normalizing flows have the ability to generate photo-realistic images and provide flexible latent space manipulation. In this work, we present a novel flow-based generative model, GazeFlow (the code will be made available at https://github.com/CVI-SZU/GazeFlow), for gaze redirection. The visual results of gaze redirection show that the quality of eye images synthesized by GazeFlow is significantly higher than that of other approaches such as Deep Warp and PRGAN. Our approach has also been applied to augment training data to improve the accuracy of gaze estimators, and significant improvements have been achieved in both within-dataset and cross-dataset experiments.
UR - http://www.scopus.com/inward/record.url?scp=85116420910&partnerID=8YFLogxK
U2 - 10.1109/IJCNN52387.2021.9533913
DO - 10.1109/IJCNN52387.2021.9533913
M3 - Conference contribution
AN - SCOPUS:85116420910
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 July 2021 through 22 July 2021
ER -