Abstract
Detecting 3D mask attacks on a face recognition system is challenging. Although genuine faces and 3D face masks show significantly different remote photoplethysmography (rPPG) signals, rPPG-based face anti-spoofing methods often suffer from performance degradation due to unstable face alignment across the video sequence and weak rPPG signals. To enhance the rPPG signal in a motion-robust way, a landmark-anchored face stitching method is proposed that aligns faces robustly and precisely at the pixel level by using both SIFT keypoints and facial landmarks. To better encode the rPPG signal, a weighted spatial-temporal representation is proposed, which emphasizes the face regions rich in blood vessels. In addition, characteristics of rPPG signals in different color spaces are jointly utilized. To improve generalization capability, a lightweight EfficientNet with a Gated Recurrent Unit (GRU) is designed to extract both spatial and temporal features from the rPPG spatial-temporal representation for classification. The proposed method is compared with state-of-the-art methods on five benchmark datasets under both intra-dataset and cross-dataset evaluations, and shows a significant and consistent improvement over other state-of-the-art rPPG-based methods for face spoofing detection.
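As an illustration of the classification stage described above, the following is a minimal sketch (not the authors' released code) of an EfficientNet backbone combined with a GRU over an rPPG spatial-temporal representation. The chunking of the map into `T` temporal segments, the input size, the class name `RPPGSpoofClassifier`, and the two-class output are assumptions made for this example, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class RPPGSpoofClassifier(nn.Module):
    """Sketch: EfficientNet extracts spatial features from each temporal
    chunk of the rPPG spatial-temporal map; a GRU aggregates them over time."""

    def __init__(self, gru_hidden=128, num_classes=2):
        super().__init__()
        backbone = efficientnet_b0(weights=None)          # lightweight CNN backbone
        self.cnn = nn.Sequential(
            backbone.features,                            # spatial feature extractor
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                                 # -> (B*T, 1280)
        )
        self.gru = nn.GRU(1280, gru_hidden, batch_first=True)  # temporal modeling
        self.fc = nn.Linear(gru_hidden, num_classes)            # genuine vs. 3D mask

    def forward(self, x):
        # x: (B, T, 3, H, W) -- T temporal chunks of the spatial-temporal map
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        _, h_n = self.gru(feats)                          # h_n: (1, B, gru_hidden)
        return self.fc(h_n[-1])                           # per-video logits


if __name__ == "__main__":
    maps = torch.randn(2, 8, 3, 64, 64)                   # toy batch: 2 videos, 8 chunks
    print(RPPGSpoofClassifier()(maps).shape)              # torch.Size([2, 2])
```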
| Original language | English |
| --- | --- |
| Pages (from-to) | 4313-4328 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Information Forensics and Security |
| Volume | 18 |
| DOIs | |
| Publication status | Published - 2023 |
Keywords
- 3D mask attack
- EfficientNet
- Landmark-anchored face stitching
- face spoofing detection
- rPPG
ASJC Scopus subject areas
- Safety, Risk, Reliability and Quality
- Computer Networks and Communications