Abstract
Anomaly detection is a challenging task, especially detecting and segmenting tiny defect regions in images without anomaly priors. Although deep encoder-decoder convolutional neural networks have achieved good anomaly detection results, existing methods operate uniformly on all extracted image features without disentangling them. To fully exploit the texture and semantic information of images, a novel unsupervised anomaly detection method is proposed. Specifically, discriminative features are extracted from images by a deep pre-trained network, with shallow and deep features aggregated into texture and semantic modules, respectively. A feature fusion module is then developed to interactively exchange feature information between the two modules. Texture and semantic segmentation results are obtained by comparing the texture and semantic features before and after reconstruction, respectively. Finally, an anomaly segmentation module is designed to generate anomaly detection results by integrating the outputs of the texture and semantic modules with a threshold. Experimental results on benchmark anomaly detection datasets demonstrate that the proposed method detects anomalies efficiently and effectively, outperforming some state-of-the-art methods by 2.7% in classification and 0.6% in segmentation.
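As an illustration only (not the authors' released code), the following PyTorch sketch shows one way such a texture/semantic split could be wired up: a frozen pre-trained ResNet-18 backbone supplies shallow (texture) and deep (semantic) features, lightweight decoders reconstruct each stream, and the two per-pixel reconstruction errors are upsampled, fused, and thresholded into an anomaly mask. The backbone choice, the layer split, the toy decoders, the simple averaging used in place of the paper's interactive fusion module, and the 0.5 threshold are all assumptions; torchvision ≥ 0.13 is assumed for the `weights` argument.

```python
# Minimal sketch of a texture/semantic feature-disentangling anomaly detector.
# Module names, channel sizes, and the averaging fusion are illustrative assumptions,
# not the method from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class DisentangledAnomalyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")  # pre-trained feature extractor
        # Shallow layers -> texture features, deeper layers -> semantic features.
        self.texture_extractor = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1)                      # 64 channels, stride 4
        self.semantic_extractor = nn.Sequential(
            backbone.layer2, backbone.layer3)     # 256 channels, stride 16
        for p in self.parameters():
            p.requires_grad_(False)               # freeze the pre-trained encoder only

        # Lightweight reconstruction heads (stand-ins for the paper's decoders).
        self.texture_decoder = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1))
        self.semantic_decoder = nn.Sequential(
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, 256, 3, padding=1))

    def forward(self, x, threshold=0.5):
        tex = self.texture_extractor(x)
        sem = self.semantic_extractor(tex)

        tex_rec = self.texture_decoder(tex)
        sem_rec = self.semantic_decoder(sem)

        # Per-pixel reconstruction error for each stream.
        tex_err = (tex - tex_rec).pow(2).mean(dim=1, keepdim=True)
        sem_err = (sem - sem_rec).pow(2).mean(dim=1, keepdim=True)

        # Fuse both error maps at image resolution and threshold to segment anomalies.
        size = x.shape[-2:]
        tex_map = F.interpolate(tex_err, size=size, mode="bilinear", align_corners=False)
        sem_map = F.interpolate(sem_err, size=size, mode="bilinear", align_corners=False)
        anomaly_map = (tex_map + sem_map) / 2
        return anomaly_map, (anomaly_map > threshold).float()

if __name__ == "__main__":
    model = DisentangledAnomalyDetector().eval()
    with torch.no_grad():
        scores, mask = model(torch.randn(1, 3, 256, 256))
    print(scores.shape, mask.shape)  # both: torch.Size([1, 1, 256, 256])
```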
| Original language | English |
| --- | --- |
| Pages (from-to) | 829-843 |
| Number of pages | 15 |
| Journal | IET Computer Vision |
| Volume | 17 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - Oct 2023 |
| Externally published | Yes |
Keywords
- textile industry
- vision defects
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition