A deep learning model for identifying diabetic retinopathy using optical coherence tomography angiography

Gahyung Ryu, Kyungmin Lee, Donggeun Park, Sang Hyun Park, Min Sagong

Research output: Contribution to journal › Article › peer-review

47 Scopus citations

Abstract

As the prevalence of diabetes increases, millions of people need to be screened for diabetic retinopathy (DR). Remarkable advances in technology have made it possible to use artificial intelligence to screen for DR from retinal images with high accuracy and reliability, reducing human labor by processing large amounts of data in a shorter time. We developed a fully automated classification algorithm to diagnose DR and identify referable status from optical coherence tomography angiography (OCTA) images using a convolutional neural network (CNN) model, and verified its feasibility by comparing its performance with that of a conventional machine learning model. Ground truths for classification were established based on ultra-widefield fluorescein angiography to increase the accuracy of data annotation. The proposed CNN classifier achieved an accuracy of 91–98%, a sensitivity of 86–97%, a specificity of 94–99%, and an area under the curve of 0.919–0.976. Overall, similar performance was achieved in the external validation. The results were similar regardless of the size and depth of the OCTA images, indicating that DR could be satisfactorily classified even with images covering a narrow area of the macular region and a single retinal image slab. CNN-based classification using OCTA is expected to create a novel diagnostic workflow for DR detection and referral.
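
To make the described pipeline concrete, the sketch below shows how a small CNN classifier for en-face OCTA slabs could be structured. It is a minimal illustration, not the authors' published architecture: the framework (PyTorch), layer sizes, single-channel 224×224 input, and the binary referable/non-referable target are all illustrative assumptions.

```python
# Minimal sketch of a CNN classifier for en-face OCTA images.
# NOTE: this is not the architecture from the paper; input size, channel
# count, layer widths, and the two-class (referable vs. non-referable)
# output are assumptions made for illustration only.
import torch
import torch.nn as nn


class OCTAClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Three convolutional blocks; each max-pool halves spatial resolution.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # single-channel OCTA slab
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                       # global average pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    model = OCTAClassifier()
    # One dummy grayscale 224x224 OCTA image (batch size 1).
    logits = model(torch.randn(1, 1, 224, 224))
    print(logits.shape)  # torch.Size([1, 2])
```

In practice, a classifier like this would be trained with a cross-entropy loss against annotation-derived labels (here, labels grounded in ultra-widefield fluorescein angiography), and referability would be read off the predicted class probabilities.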

Original language: English
Article number: 23024
Journal: Scientific Reports
Volume: 11
Issue number: 1
DOIs
State: Published - Dec 2021

Bibliographical note

Publisher Copyright:
© 2021, The Author(s).
