Abstract
Conventional linear discriminant analysis (LDA) and its extended versions have some potential drawbacks. First, they are sensitive to outliers, noise, and variations in the data, which degrades their performance in dimensionality reduction. Second, most LDA-based methods focus only on the global structure of the data and ignore its local geometric structure, which plays an important role in dimensionality reduction. More importantly, the total number of projections obtained by LDA-based methods is limited by the number of classes in the training data set. To solve these problems, we propose a novel method called robust locally discriminant analysis via capped norm (RLDA). By replacing the L2-norm with the L2,1-norm to construct a robust between-class scatter matrix, and by using the capped norm to further reduce the negative impact of outliers when constructing the within-class scatter matrix, we guarantee the robustness of the proposed method. In addition, we impose an L2,1-norm regularization term on the projection matrix to ensure its joint sparsity. Since we redefine the scatter matrices of traditional LDA, the number of projections we obtain is no longer restricted by the number of classes. Experimental results show the superior performance of RLDA over other dimensionality reduction methods.
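The abstract names three ingredients: an L2,1-norm between-class scatter, a capped-norm within-class scatter that zeroes out outliers, and an L2,1 regularizer on the projection matrix. The Python sketch below illustrates how such an iteratively reweighted scheme could look in the abstract's terms; it is a minimal illustration under common IRLS-style surrogates, not the authors' published algorithm. The function name `rlda_sketch`, the cap threshold `eps`, the regularizer weight `gamma`, and all update rules are assumptions made for illustration.

```python
import numpy as np

def rlda_sketch(X, y, dim, eps=1.0, gamma=0.1, n_iter=10):
    """Illustrative sketch of a capped-norm, robust LDA-style projection.

    X     : (n_samples, n_features) data matrix
    y     : (n_samples,) integer class labels
    dim   : number of projection directions to return
    eps   : cap threshold -- samples whose within-class residual exceeds
            eps get zero weight (the "capped norm" idea)
    gamma : strength of the L2,1-style regularizer on the projection W
    """
    n, d = X.shape
    classes = np.unique(y)
    mu = X.mean(axis=0)
    # random orthonormal initialization of the projection matrix
    W = np.linalg.qr(np.random.randn(d, dim))[0]

    for _ in range(n_iter):
        Sb = np.zeros((d, d))
        Sw = np.zeros((d, d))
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            # L2,1-style reweighting of the between-class term:
            # weight 1 / (2 * ||W^T (mc - mu)||) replaces the squared norm.
            bnorm = np.linalg.norm(W.T @ (mc - mu)) + 1e-8
            diff = (mc - mu)[:, None]
            Sb += len(Xc) / (2.0 * bnorm) * (diff @ diff.T)
            for xi in Xc:
                r = np.linalg.norm(W.T @ (xi - mc))
                # capped norm: residuals beyond eps contribute nothing,
                # so gross outliers cannot inflate the within-class scatter
                wgt = 1.0 / (2.0 * r + 1e-8) if r <= eps else 0.0
                dv = (xi - mc)[:, None]
                Sw += wgt * (dv @ dv.T)
        # L2,1 regularizer on W via the usual diagonal surrogate D,
        # D_ii = 1 / (2 * ||i-th row of W||)
        D = np.diag(1.0 / (2.0 * np.linalg.norm(W, axis=1) + 1e-8))
        # update W from the top eigenvectors of (Sw + gamma*D)^{-1} Sb
        M = np.linalg.solve(Sw + gamma * D + 1e-6 * np.eye(d), Sb)
        eigvals, eigvecs = np.linalg.eig(M)
        order = np.argsort(-eigvals.real)
        W = eigvecs[:, order[:dim]].real
    return W
```

A call such as `W = rlda_sketch(X, y, dim=5)` returns a feature-by-dim projection; `X @ W` gives the reduced data. Note one simplification: this sketch keeps the plain class-mean form of the between-class scatter, whose rank is bounded by the number of classes, whereas the paper's redefined scatter matrices are what remove that bound on the number of projections.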
Original language | English
---|---
Article number | 8561275
Pages (from-to) | 4641-4652
Number of pages | 12
Journal | IEEE Access
Volume | 7
DOIs |
Publication status | Published - 2019
Externally published | Yes
Keywords
- Feature extraction
- L2,1-regularization
- capped L2-norm loss
- discriminant analysis
- manifold learning
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering