TY - GEN
T1 - Efficient optic cup detection from intra-image learning with retinal structure priors
AU - Xu, Yanwu
AU - Liu, Jiang
AU - Lin, Stephen
AU - Xu, Dong
AU - Cheung, Carol Y.
AU - Aung, Tin
AU - Wong, Tien Yin
N1 - Publisher Copyright:
© Springer-Verlag Berlin Heidelberg 2012.
PY - 2012
Y1 - 2012
N2 - We present a superpixel-based learning framework built on retinal structure priors for glaucoma diagnosis. In digital fundus photographs, our method automatically localizes the optic cup, the primary image component clinically used for identifying glaucoma. This method makes three major contributions. First, it processes fundus images at the superpixel level, which yields features more descriptive and effective than those employed by pixel-based techniques, while providing significant computational savings over sliding-window methods. Second, the classifier learning process does not rely on pre-labeled training samples; rather, the training samples are extracted from the test image itself using structural priors on relative cup and disc positions. Third, we present a classification refinement scheme that utilizes both structural priors and local context. Tested on the ORIGA−light clinical dataset comprising 650 images, the proposed method achieves a 26.7% non-overlap ratio with manually labeled ground truth and a 0.081 absolute error in the cup-to-disc ratio (CDR), a simple yet widely used diagnostic measure. This level of accuracy is comparable to or higher than that of the state-of-the-art technique [1], with a speedup factor of tens to hundreds.
AB - We present a superpixel-based learning framework built on retinal structure priors for glaucoma diagnosis. In digital fundus photographs, our method automatically localizes the optic cup, the primary image component clinically used for identifying glaucoma. This method makes three major contributions. First, it processes fundus images at the superpixel level, which yields features more descriptive and effective than those employed by pixel-based techniques, while providing significant computational savings over sliding-window methods. Second, the classifier learning process does not rely on pre-labeled training samples; rather, the training samples are extracted from the test image itself using structural priors on relative cup and disc positions. Third, we present a classification refinement scheme that utilizes both structural priors and local context. Tested on the ORIGA−light clinical dataset comprising 650 images, the proposed method achieves a 26.7% non-overlap ratio with manually labeled ground truth and a 0.081 absolute error in the cup-to-disc ratio (CDR), a simple yet widely used diagnostic measure. This level of accuracy is comparable to or higher than that of the state-of-the-art technique [1], with a speedup factor of tens to hundreds.
UR - http://www.scopus.com/inward/record.url?scp=84885897327&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-33415-3_8
DO - 10.1007/978-3-642-33415-3_8
M3 - Conference contribution
AN - SCOPUS:84885897327
SN - 9783642334146
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 58
EP - 65
BT - Medical Image Computing and Computer-Assisted Intervention, MICCAI 2012 - 15th International Conference, Proceedings
A2 - Ayache, Nicholas
A2 - Delingette, Herve
A2 - Golland, Polina
A2 - Mori, Kensaku
PB - Springer-Verlag
T2 - 15th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2012
Y2 - 1 October 2012 through 5 October 2012
ER -