TY - GEN
T1 - LoGo Transformer
T2 - 2023 International Joint Conference on Neural Networks, IJCNN 2023
AU - Zhang, Yinglin
AU - Cai, Zichao
AU - Higashita, Risa
AU - Liu, Jiang
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Corneal endothelial cell segmentation plays an important role in quantifying clinical indicators for corneal health evaluation. Although Convolutional Neural Networks (CNNs) are widely used for medical image segmentation, their receptive fields are limited. Recently, Transformers have outperformed convolutions in modeling long-range dependencies, but they lack local inductive bias, so pure transformer networks are difficult to train on small medical image datasets. Moreover, Transformer networks cannot be effectively deployed with specular microscopes because they are parameter-heavy and computationally complex. To this end, we find that appropriately limiting attention spans and modeling information at different granularities can introduce local constraints and enhance attention representations. This paper explores a hierarchical, fully self-attentional, lightweight network for medical image segmentation, using Local and Global (LoGo) transformers to separately model attention representations at low-level and high-level layers. Specifically, the local efficient transformer (LoTr) layer decomposes features into finer-grained elements to model local attention representations, while the global axial transformer (GoTr) builds long-range dependencies across the entire feature space. With this hierarchical structure, we gradually and efficiently aggregate semantic features from different levels. Experimental results on segmentation tasks for the corneal endothelial cell, the ciliary body, and the liver demonstrate the accuracy, effectiveness, and robustness of our method. Compared with CNN-based and hybrid CNN-Transformer state-of-the-art (SOTA) methods, the LoGo transformer obtains the best results.
KW - Corneal endothelial cell segmentation
KW - Lightweight
KW - Robustness
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=85169621589&partnerID=8YFLogxK
U2 - 10.1109/IJCNN54540.2023.10191116
DO - 10.1109/IJCNN54540.2023.10191116
M3 - Conference contribution
AN - SCOPUS:85169621589
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - IJCNN 2023 - International Joint Conference on Neural Networks, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 June 2023 through 23 June 2023
ER -