Consistency Preservation and Feature Entropy Regularization for GAN Based Face Editing

Weicheng Xie, Wenya Lu, Zhibin Peng, Linlin Shen

Research output: Journal Publication › Article › peer-review

2 Citations (Scopus)

Abstract

Generative Adversarial Networks (GANs) have been widely used for image-to-image translation-based facial attribute editing. Existing GAN networks are prone to generating samples with anomalies, which may be caused by a lack of consistency preservation and by feature entanglement. To preserve image consistency, many studies have resorted to designing the network framework and loss functions, e.g., the cycle-consistency loss. However, a generator trained with the cycle-consistency loss cannot well preserve attribute-irrelevant features, and its feature-level noise may cause synthesis abnormalities. For feature disentanglement, previous works mined the implicit semantics of feature spaces, but these semantics are not sufficiently stable or intuitive. For consistency preservation, we propose a target consistency loss to complement the cycle-consistency loss, enabling the network to learn to preserve image features more directly. Meanwhile, we filter out outlier feature maps to reduce synthesis abnormalities and propose a dynamic dropout to better preserve attribute-irrelevant features. For feature disentanglement, we encode image semantics more stably and intuitively and propose an entropy regularization that decouples these semantics to allow independent editing of different attributes. The proposed modules are general and can be easily integrated with available image-to-image-based GAN models such as StarGAN, AttGAN, and STGAN. Extensive experiments on the CelebA dataset show that our strategy largely reduces artifacts and better preserves subtle facial features, and thus significantly improves the facial editing performance of these mainstream GAN models in terms of FID, PSNR, and SSIM. Additional experiments on realistic expression editing show that our method outperforms StarGAN on RaFD and achieves much better generalization than the three baselines on the FFHQ, RaFD, and LFW datasets.
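The abstract does not give the exact formulations, but the three loss terms it names can be illustrated with a minimal, hedged PyTorch sketch. The code below is an assumption-based reading, not the authors' implementation: `generator`, `att_weights`, and the specific L1/entropy forms are hypothetical stand-ins for the paper's target consistency loss, cycle-consistency loss, and entropy regularization.

```python
# Hedged sketch (NOT the authors' exact method): illustrative loss terms for
# cycle consistency, a direct "target consistency" reconstruction term, and an
# entropy regularizer over per-attribute semantic weights. All names here
# (generator, att_weights, ...) are hypothetical.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(generator, x, att_src, att_tgt):
    """L1 distance between the input and its round-trip edit x -> tgt -> src."""
    x_edit = generator(x, att_tgt)       # edit toward the target attributes
    x_back = generator(x_edit, att_src)  # edit back toward the source attributes
    return F.l1_loss(x_back, x)

def target_consistency_loss(generator, x, att_tgt):
    """Direct L1 term pushing the edited image to keep attribute-irrelevant
    content of the input (one plausible reading of 'target consistency')."""
    x_edit = generator(x, att_tgt)
    return F.l1_loss(x_edit, x)

def entropy_regularization(att_weights, eps=1e-8):
    """Mean Shannon entropy of per-attribute semantic weights; minimizing it
    encourages each semantic unit to respond to a single attribute."""
    p = F.softmax(att_weights, dim=-1)            # (batch, units, num_attributes)
    entropy = -(p * (p + eps).log()).sum(dim=-1)  # entropy per unit
    return entropy.mean()
```

In such a setup, the terms would typically be combined with the adversarial and attribute-classification losses of the base model (StarGAN, AttGAN, or STGAN), e.g. `loss = loss_gan + λ_cyc * cyc + λ_tc * tc + λ_ent * ent`, with the weights λ treated as hyperparameters.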

Original language: English
Article number: 3289757
Pages (from-to): 8892-8905
Number of pages: 14
Journal: IEEE Transactions on Multimedia
Volume: 25
DOIs
Publication status: Published - 2023
Externally published: Yes

Keywords

  • Consistency preservation
  • Entropy regularization
  • GAN
  • Self-adaptive dropout

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Computer Science Applications
  • Electrical and Electronic Engineering
