Abstract
Ridge regression is widely used in multivariate data analysis. However, in very high-dimensional settings such as image feature extraction and recognition, conventional ridge regression and its extensions suffer from the small-class problem, that is, the number of projections obtained by ridge regression is limited by the number of classes. In this paper, we propose a novel method called generalized robust regression (GRR) for jointly sparse subspace learning, which addresses this problem. GRR not only imposes an L2,1-norm penalty on both the loss function and the regularization term to guarantee joint sparsity and robustness to outliers for effective feature selection, but also uses the L2,1-norm as the distance measurement to take the intrinsic local geometric structure of the data into account and improve performance. Moreover, by incorporating an elastic factor into the loss function, GRR enhances robustness and can obtain more projections for feature selection or classification. To obtain the optimal solution of GRR, an iterative algorithm is proposed and its convergence is proved. Experiments on six well-known data sets demonstrate the merits of the proposed method. The results indicate that GRR is a robust and efficient regression method for face recognition.
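The abstract does not reproduce the GRR objective itself, but the joint L2,1-norm formulation it describes (an L2,1 penalty on both the loss and the regularizer) is commonly minimized by iteratively reweighted least squares. The sketch below is a minimal illustration of that general idea under the assumption of an objective of the form min_W ||XW - Y||_{2,1} + λ||W||_{2,1}; it omits the locality-preserving term and the elastic factor described in the paper, and the function name, warm start, and default parameters are illustrative choices, not the authors' implementation.

```python
import numpy as np

def l21_joint_sparse_regression(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Sketch: solve min_W ||XW - Y||_{2,1} + lam * ||W||_{2,1}
    by iteratively reweighted least squares.
    X: (n, d) sample matrix, Y: (n, c) target matrix (e.g. one-hot labels)."""
    n, d = X.shape
    # Warm start from an ordinary ridge solution.
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    for _ in range(n_iter):
        # Sample-wise weights from the robust L2,1 loss term.
        R = X @ W - Y
        u = 1.0 / (2.0 * np.maximum(np.linalg.norm(R, axis=1), eps))
        # Feature-wise weights enforcing joint row sparsity of W.
        v = 1.0 / (2.0 * np.maximum(np.linalg.norm(W, axis=1), eps))
        # Closed-form weighted ridge update: (X^T U X + lam V) W = X^T U Y.
        XtU = X.T * u  # equals X^T diag(u)
        W = np.linalg.solve(XtU @ X + lam * np.diag(v), XtU @ Y)
    return W
```

Rows of W with near-zero norm identify features that can be discarded, which is how a jointly sparse projection of this kind supports feature selection.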
Original language | English |
---|---|
Article number | 8307183 |
Pages (from-to) | 756-772 |
Number of pages | 17 |
Journal | IEEE Transactions on Circuits and Systems for Video Technology |
Volume | 29 |
Issue number | 3 |
DOIs | |
Publication status | Published - Mar 2019 |
Externally published | Yes |
Keywords
- Ridge regression
- face recognition
- feature selection
- small-class problem
- subspace learning
ASJC Scopus subject areas
- Media Technology
- Electrical and Electronic Engineering