Abstract
Medical image segmentation is fundamental to computer-aided diagnosis and surgery. Various attention modules have been proposed to improve segmentation results, but they have limitations for medical image segmentation, such as heavy computation and weak applicability across frameworks. To address these problems, we propose a new attention module named FGAM (Feature Guided Attention Module), a simple, pluggable, and effective module for medical image segmentation. FGAM exploits the representational power of the encoder and decoder features. Specifically, the shallow decoder layers contain abundant information, which FGAM treats as a queryable feature dictionary. The module contains a parameter-free activator and can be removed from various encoder-decoder networks after training. The efficacy of FGAM is demonstrated on various encoder-decoder models across five datasets: four publicly available datasets and one in-house dataset.
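The abstract does not give the exact formulation of FGAM, but the description (decoder features used as a guide, a parameter-free activator, removable after training) suggests the following minimal sketch. All names and design choices here (`FGAMSketch`, sigmoid as the activator, channel-mean pooling, the residual term) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FGAMSketch(nn.Module):
    """Hypothetical sketch of a feature-guided attention module.

    A decoder feature is treated as a guide: a parameter-free activation
    (sigmoid here, as an assumption) turns it into an attention map that
    reweights the encoder (skip) feature. Because the block holds no
    learnable weights, it can be dropped after the surrounding
    encoder-decoder network has been trained.
    """

    def forward(self, encoder_feat: torch.Tensor, decoder_feat: torch.Tensor) -> torch.Tensor:
        # Match the spatial size of the decoder guide to the encoder feature.
        if decoder_feat.shape[2:] != encoder_feat.shape[2:]:
            decoder_feat = F.interpolate(
                decoder_feat, size=encoder_feat.shape[2:],
                mode="bilinear", align_corners=False,
            )
        # Collapse channels to a single-channel guide, then apply a
        # parameter-free activator to obtain an attention map in (0, 1).
        guide = decoder_feat.mean(dim=1, keepdim=True)
        attention = torch.sigmoid(guide)
        # Reweight the encoder feature; the residual term keeps the original
        # signal so the module can be removed without breaking the network.
        return encoder_feat * attention + encoder_feat


if __name__ == "__main__":
    enc = torch.randn(2, 64, 128, 128)   # encoder skip feature
    dec = torch.randn(2, 32, 64, 64)     # shallower decoder feature
    out = FGAMSketch()(enc, dec)
    print(out.shape)  # torch.Size([2, 64, 128, 128])
```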
| Original language | English |
| --- | --- |
| Article number | 105628 |
| Journal | Computers in Biology and Medicine |
| Volume | 146 |
| DOIs | |
| Publication status | Published - Jul 2022 |
| Externally published | Yes |
Keywords
- Attention mechanism
- Encoder-decoder network
- Medical image segmentation
ASJC Scopus subject areas
- Health Informatics
- Computer Science Applications