Evaluating performance of biomedical image retrieval systems-An overview of the medical image retrieval task at ImageCLEF 2004-2013

Jayashree Kalpathy-Cramer, Alba García Seco de Herrera, Dina Demner-Fushman, Sameer Antani, Steven Bedrick, Henning Müller

Research output: Contribution to journal › Article

68 Citations (Scopus)

Abstract

Medical image retrieval and classification have been extremely active research topics over the past 15 years. Within the ImageCLEF benchmark in medical image retrieval and classification, a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created.

Original language: English (US)
Pages (from-to): 55-61
Number of pages: 7
Journal: Computerized Medical Imaging and Graphics
Volume: 39
DOI: 10.1016/j.compmedimag.2014.03.004
State: Published - Jan 1 2015

Keywords

  • Biomedical literature
  • Content-based retrieval
  • Image retrieval
  • Multimodal medical retrieval
  • Text-based image retrieval

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging
  • Health Informatics
  • Radiological and Ultrasound Technology
  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition

Cite this

Evaluating performance of biomedical image retrieval systems-An overview of the medical image retrieval task at ImageCLEF 2004-2013. / Kalpathy-Cramer, Jayashree; de Herrera, Alba García Seco; Demner-Fushman, Dina; Antani, Sameer; Bedrick, Steven; Müller, Henning.

In: Computerized Medical Imaging and Graphics, Vol. 39, 01.01.2015, p. 55-61.

@article{82257ebcaac4434a81ce4be585257be5,
  title = "Evaluating performance of biomedical image retrieval systems-An overview of the medical image retrieval task at ImageCLEF 2004-2013",
  abstract = "Medical image retrieval and classification have been extremely active research topics over the past 15 years. Within the ImageCLEF benchmark in medical image retrieval and classification, a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created.",
  keywords = "Biomedical literature, Content-based retrieval, Image retrieval, Multimodal medical retrieval, Text-based image retrieval",
  author = "Jayashree Kalpathy-Cramer and {de Herrera}, {Alba Garc{\'i}a Seco} and Dina Demner-Fushman and Sameer Antani and Steven Bedrick and Henning M{\"u}ller",
  year = "2015",
  month = "1",
  day = "1",
  doi = "10.1016/j.compmedimag.2014.03.004",
  language = "English (US)",
  volume = "39",
  pages = "55--61",
  journal = "Computerized Medical Imaging and Graphics",
  issn = "0895-6111",
  publisher = "Elsevier Limited",
}

TY - JOUR
T1 - Evaluating performance of biomedical image retrieval systems-An overview of the medical image retrieval task at ImageCLEF 2004-2013
AU - Kalpathy-Cramer, Jayashree
AU - de Herrera, Alba García Seco
AU - Demner-Fushman, Dina
AU - Antani, Sameer
AU - Bedrick, Steven
AU - Müller, Henning
PY - 2015/1/1
Y1 - 2015/1/1
N2 - Medical image retrieval and classification have been extremely active research topics over the past 15 years. Within the ImageCLEF benchmark in medical image retrieval and classification, a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created.
AB - Medical image retrieval and classification have been extremely active research topics over the past 15 years. Within the ImageCLEF benchmark in medical image retrieval and classification, a standard test bed was created that allows researchers to compare their approaches and ideas on increasingly large and varied data sets including generated ground truth. This article describes the lessons learned in ten evaluation campaigns. A detailed analysis of the data also highlights the value of the resources created.
KW - Biomedical literature
KW - Content-based retrieval
KW - Image retrieval
KW - Multimodal medical retrieval
KW - Text-based image retrieval
UR - http://www.scopus.com/inward/record.url?scp=84920271344&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84920271344&partnerID=8YFLogxK
U2 - 10.1016/j.compmedimag.2014.03.004
DO - 10.1016/j.compmedimag.2014.03.004
M3 - Article
C2 - 24746250
AN - SCOPUS:84920271344
VL - 39
SP - 55
EP - 61
JO - Computerized Medical Imaging and Graphics
JF - Computerized Medical Imaging and Graphics
SN - 0895-6111
ER -