Overview of the ImageCLEF 2007 medical retrieval and annotation tasks

Henning Müller, Thomas Deselaers, Eugene Kim, Jayashree Kalpathy-Cramer, Thomas M. Deserno, William Hersh

Research output: Contribution to journal › Conference article › peer-review


Abstract

This paper describes the medical image retrieval and medical image annotation tasks of ImageCLEF 2007. Separate sections describe each of the two tasks, covering participation and an evaluation of the major findings from the results. A total of 13 groups participated in the medical retrieval task and 10 in the medical annotation task. The medical retrieval task added two new data sets, for a total of over 66,000 images. Search topics were derived from a log file of the PubMed biomedical literature search system, creating realistic information needs with a clear user model in mind. The medical annotation task was organised in a new format in 2007: a hierarchical classification had to be performed, and classification could be stopped at any confidence level. This required algorithms to change significantly, integrating a confidence measure into their decisions so they could judge where to stop classifying and avoid making mistakes in the hierarchy. Scoring took both errors and unclassified parts into account.
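The confidence-based stopping rule summarised above lends itself to a short illustration. The Python sketch below shows one plausible way to walk a classification hierarchy, stopping as soon as per-level confidence drops below a threshold, together with a toy error count in which a wrong decision costs more than leaving the remaining levels unclassified. The function names, threshold, and penalty weights are illustrative assumptions, not the official ImageCLEF 2007 evaluation code, which is defined in the task overview itself.

```python
# Hedged sketch: hierarchical classification with confidence-based early
# stopping, in the spirit of the ImageCLEF 2007 annotation task. All names,
# thresholds, and penalty weights are illustrative assumptions.

from typing import Callable, List, Optional, Tuple

# A per-level classifier returns (predicted_child, confidence) given the
# path decided so far; None means a leaf was reached, nothing left to decide.
LevelClassifier = Callable[[List[str]], Optional[Tuple[str, float]]]

def classify_with_stopping(classifier: LevelClassifier,
                           threshold: float = 0.8) -> List[str]:
    """Walk down the hierarchy, one decision per level, and stop
    ('don't know') once confidence falls below the threshold."""
    path: List[str] = []
    while True:
        step = classifier(path)
        if step is None:            # reached a leaf of the hierarchy
            return path
        label, confidence = step
        if confidence < threshold:  # too uncertain: stop classifying here
            return path
        path.append(label)

def score(predicted: List[str], truth: List[str],
          wrong_penalty: float = 1.0,
          unclassified_penalty: float = 0.5) -> float:
    """Toy error count: a wrong decision costs more than an unclassified
    level, so stopping early can be the safer move. (In the official
    scheme an early error also affects deeper levels; omitted here.)"""
    penalty = 0.0
    for level, true_label in enumerate(truth):
        if level >= len(predicted):           # unclassified remainder
            penalty += unclassified_penalty
        elif predicted[level] != true_label:  # wrong decision at this level
            penalty += wrong_penalty
    return penalty
```

Under a scoring of this shape, a system that stops at the second level when unsure incurs half-penalties for the remaining levels instead of full penalties for likely mistakes, which is precisely the trade-off the 2007 task format asked participants to manage.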

Original language: English (US)
Journal: CEUR Workshop Proceedings
Volume: 1173
State: Published - 2007
Event: 2007 Cross Language Evaluation Forum Workshop, CLEF 2007, co-located with the 11th European Conference on Digital Libraries, ECDL 2007 - Budapest, Hungary
Duration: Sep 19, 2007 - Sep 21, 2007

Keywords

  • Image classification
  • Image retrieval
  • Medical imaging
  • Performance evaluation

ASJC Scopus subject areas

  • General Computer Science
