Deep Learning for the Diagnosis of Stage in Retinopathy of Prematurity: Accuracy and Generalizability across Populations and Cameras

Jimmy S. Chen, Aaron S. Coyner, Susan Ostmo, Kemal Sonmez, Sanyam Bajimaya, Eli Pradhan, Nita Valikodath, Emily D. Cole, Tala Al-Khaled, R. V. Paul Chan, Praveer Singh, Jayashree Kalpathy-Cramer, Michael F. Chiang, J. Peter Campbell

Research output: Contribution to journal › Article › peer-review



Purpose: Stage is an important feature to identify in retinal images of infants at risk of retinopathy of prematurity (ROP). The purpose of this study was to implement a convolutional neural network (CNN) for binary detection of stages 1, 2, and 3 in ROP and to evaluate its generalizability across different populations and camera systems.

Design: Diagnostic validation study of a CNN for stage detection.

Participants: Retinal fundus images obtained from preterm infants during routine ROP screenings.

Methods: Two datasets were used: 5943 fundus images obtained with the RetCam camera (Natus Medical, Pleasanton, CA) from 9 North American institutions and 5049 images obtained with the 3nethra camera (Forus Health Incorporated, Bengaluru, India) from 4 hospitals in Nepal. Images were labeled for the presence of stage by 1 to 3 expert graders. Three CNN models were trained using 5-fold cross-validation on the North American dataset alone, the Nepali dataset alone, and a combined dataset, and were evaluated on 2 held-out test sets consisting of 708 and 247 images from the Nepali and North American datasets, respectively.

Main Outcome Measures: Convolutional neural network performance was evaluated using area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve (AUPRC), sensitivity, and specificity.

Results: Both the North American- and Nepali-trained models demonstrated high performance on a test set from the same population: AUROC 0.99, AUPRC 0.98, and sensitivity 94%, and AUROC 0.97, AUPRC 0.91, and sensitivity 73%, respectively. However, performance decreased to AUROC 0.96, AUPRC 0.88, and sensitivity 52%, and AUROC 0.62, AUPRC 0.36, and sensitivity 44%, respectively, when each model was evaluated on a test set from the other population. Compared with the models trained on individual datasets, the model trained on the combined dataset achieved improved performance on each respective test set: sensitivity improved from 94% to 98% on the North American test set and from 73% to 82% on the Nepali test set.

Conclusions: A CNN can accurately identify the presence of ROP stage in retinal images, but performance depends on the similarity between the training and testing populations. We demonstrated that internal and external performance can be improved by increasing the heterogeneity of the training dataset, in this case by combining images from different populations and cameras.
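The outcome measures above (AUROC, AUPRC, sensitivity, specificity) are standard binary-classification metrics. As an illustrative sketch only, not the authors' code, they could be computed from ground-truth labels and model scores with scikit-learn; the function name, the 0.5 decision threshold, and the mock data below are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

def evaluate_stage_model(y_true, y_score, threshold=0.5):
    """Compute the study's reported outcome measures for a binary
    stage-present/stage-absent classifier: AUROC, AUPRC (approximated
    here by average precision), sensitivity, and specificity.
    The 0.5 threshold is an illustrative assumption."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)
    # Confusion matrix with fixed label order: [absent, present]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "auroc": roc_auc_score(y_true, y_score),
        "auprc": average_precision_score(y_true, y_score),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Usage with mock labels and scores (not study data):
metrics = evaluate_stage_model([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

Note that scikit-learn's `average_precision_score` is a step-wise approximation of the area under the precision-recall curve; a study may instead integrate an interpolated curve, so values can differ slightly between implementations.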

Original language: English (US)
Pages (from-to): 1027-1035
Number of pages: 9
Journal: Ophthalmology Retina
Issue number: 10
State: Published - Oct 2021


Keywords
  • Artificial intelligence
  • Generalizability
  • Neural networks
  • Retinopathy of prematurity
  • Stage

ASJC Scopus subject areas

  • Ophthalmology


