Applications of interpretability in deep learning models for ophthalmology

Adam M. Hanif, Sara Beqiri, Pearse A. Keane, J. Peter Campbell

Research output: Contribution to journal › Review article › peer-review


Abstract

Purpose of review: In this article, we introduce the concept of model interpretability, review its applications in deep learning models for clinical ophthalmology, and discuss its role in the integration of artificial intelligence in healthcare.

Recent findings: The advent of deep learning in medicine has introduced models with remarkable accuracy. However, the inherent complexity of these models undermines their users' ability to understand, debug, and ultimately trust them in clinical practice. Novel methods are increasingly being explored to improve models' 'interpretability' and draw clearer associations between their outputs and features in the input dataset. In the field of ophthalmology, interpretability methods have enabled users to make informed adjustments, identify clinically relevant imaging patterns, and predict outcomes in deep learning models.

Summary: Interpretability methods support the transparency necessary to implement, operate, and modify complex deep learning models. These benefits are being increasingly demonstrated in models for clinical ophthalmology. As quality standards for deep learning models used in healthcare continue to evolve, interpretability methods may prove influential in their path to regulatory approval and acceptance in clinical practice.
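The attribution idea summarized above — tracing a model's output back to features in the input — can be illustrated with a minimal sketch. This toy example (a hypothetical linear classifier built for illustration, not any model or method from the review) computes a vanilla-gradient saliency map: the absolute sensitivity of a class score to each input pixel.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_scores(x, W):
    """Class scores for a flattened image x under a toy linear model W."""
    return W @ x

def saliency_map(x, W, c, eps=1e-3):
    """Finite-difference gradient of the class-c score w.r.t. each pixel.

    For a linear model s_c(x) = W[c] @ x, the exact gradient is W[c],
    so the saliency map should recover |W[c]|.
    """
    grad = np.zeros_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (model_scores(xp, W)[c] - model_scores(xm, W)[c]) / (2 * eps)
    return np.abs(grad)

# Hypothetical setup: 3 classes, a 4x4 "retinal image" flattened to 16 pixels.
W = rng.normal(size=(3, 16))
x = rng.normal(size=16)
c = int(np.argmax(model_scores(x, W)))   # predicted class
sal = saliency_map(x, W, c)              # per-pixel attribution for that class
```

Real interpretability pipelines apply the same principle to deep convolutional networks (e.g. gradient- or activation-based heatmaps over fundus or OCT images); the linear case simply makes the gradient easy to verify by hand.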

Original language: English (US)
Pages (from-to): 452-458
Number of pages: 7
Journal: Current Opinion in Ophthalmology
Volume: 32
Issue number: 5
DOIs
State: Published - Sep 1 2021

Keywords

  • artificial intelligence
  • convolutional neural network
  • deep learning
  • interpretability
  • machine learning

ASJC Scopus subject areas

  • Ophthalmology
