Abstract
Sound plays an important role in human daily life: humans use sound to communicate with each other and to understand the events occurring around them. This has prompted researchers to study how to automatically identify the event that is happening by analyzing the acoustic signal. This paper presents a deep learning model enhanced by compressed sensing techniques for acoustic event classification. Compressed sensing first transforms the input acoustic signal into a reconstructed signal, reducing the noise in the input. The reconstructed signals are then fed into a 1-dimensional convolutional neural network (1D-CNN) to train a deep learning model for acoustic event classification. In addition, dropout regularization is leveraged in the 1D-CNN to mitigate overfitting. The proposed compressed sensing with 1D-CNN was evaluated on three benchmark datasets, namely Soundscapes1, Soundscapes2, and UrbanSound8K, achieving F1-scores of 80.5%, 81.1%, and 69.2%, respectively.
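The abstract describes a two-stage pipeline in which compressed sensing reconstructs a cleaner signal before classification. As a minimal illustrative sketch (not the authors' implementation — the signal dimensions, the random Gaussian measurement matrix, and the ISTA solver are all assumptions for demonstration), the reconstruction stage of compressed sensing might look like this in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 96, 5  # signal length, number of measurements, sparsity (illustrative values)

# Synthetic k-sparse signal standing in for a (transformed) acoustic frame
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(0.0, 1.0, k)

# Random Gaussian measurement matrix and compressed measurements y = A x
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
y = A @ x

# Reconstruct via ISTA (iterative soft-thresholding), which solves
# min_z 0.5 * ||A z - y||^2 + lam * ||z||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient step
z = np.zeros(n)
for _ in range(500):
    g = z - (A.T @ (A @ z - y)) / L          # gradient step on the data-fit term
    z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-thresholding

# Relative reconstruction error; the recovered z would then be fed to the 1D-CNN
rel_err = np.linalg.norm(z - x) / np.linalg.norm(x)
```

In the paper's pipeline, the reconstructed signal `z` (rather than the raw noisy input) is what the 1D-CNN classifier is trained on; the solver and sparsity basis used by the authors are not specified in this abstract.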
Original language | English |
---|---|
Pages (from-to) | 735-741 |
Number of pages | 7 |
Journal | Signal, Image and Video Processing |
Volume | 17 |
Issue number | 3 |
DOIs | |
Publication status | Published - Apr 2023 |
Externally published | Yes |
Keywords
- 1D convolutional neural network
- 1D-CNN
- Acoustic event classification
- Compressed sensing
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering