Overview of the ImageCLEFmed 2006 medical retrieval and medical annotation tasks

Henning Müller, Thomas Deselaers, Thomas Deserno, Paul Clough, Eugene Kim, William (Bill) Hersh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

55 Citations (Scopus)

Abstract

This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups (100 runs). Most runs were automatic, with only a few manual or interactive. Purely textual runs outnumbered purely visual runs, but most runs were mixed, using both visual and textual information. None of the manual or interactive techniques were significantly better than the automatic runs. The best-performing systems combined visual and textual techniques, but combinations of visual and textual features often did not improve performance. Purely visual systems performed well only on visual topics. The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which demonstrates a clear improvement in performance. The best system of 2005 would only have ranked in the middle of the field in 2006.
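
The abstract notes that the best-performing retrieval systems combined visual and textual techniques, while naive feature combinations often hurt performance. Purely as an illustration of what such a combination can look like, the sketch below shows late fusion: per-image scores from a textual run and a visual run are normalized and linearly mixed. The function names, the 0.7/0.3 weighting, and the toy data are assumptions for this example; the paper does not prescribe any particular fusion scheme, and participating systems differed widely.

    # Minimal sketch of late fusion of textual and visual retrieval scores.
    # All names, the default weight, and the toy data are illustrative
    # assumptions, not taken from the ImageCLEFmed 2006 paper.

    def normalize(scores):
        """Min-max normalize a {image_id: score} mapping to [0, 1]."""
        lo, hi = min(scores.values()), max(scores.values())
        if hi == lo:
            return {d: 0.0 for d in scores}
        return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

    def fuse(text_scores, visual_scores, w_text=0.7):
        """Linearly combine normalized textual and visual scores per image."""
        t, v = normalize(text_scores), normalize(visual_scores)
        images = set(t) | set(v)
        return sorted(
            ((d, w_text * t.get(d, 0.0) + (1 - w_text) * v.get(d, 0.0))
             for d in images),
            key=lambda pair: pair[1],
            reverse=True,
        )

    # Toy example: two small result lists for one topic.
    text_run = {"img_001": 12.3, "img_002": 9.8, "img_007": 4.1}
    visual_run = {"img_002": 0.92, "img_005": 0.88, "img_001": 0.35}
    print(fuse(text_run, visual_run)[:3])

Missing score normalization or a poorly chosen weight is one plausible reason such combinations can underperform a purely textual run, as the abstract reports.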

Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 595-608
Number of pages: 14
Volume: 4730 LNCS
State: Published - 2007
Event: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006 - Alicante, Spain
Duration: Sep 20, 2006 - Sep 22, 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4730 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006
Country: Spain
City: Alicante
Period: 9/20/06 - 9/22/06

Keywords

  • Automatic image annotation
  • Image retrieval
  • Medical information retrieval

ASJC Scopus subject areas

  • Computer Science (all)
  • Biochemistry, Genetics and Molecular Biology (all)
  • Theoretical Computer Science

Cite this

Müller, H., Deselaers, T., Deserno, T., Clough, P., Kim, E., & Hersh, W. B. (2007). Overview of the ImageCLEFmed 2006 medical retrieval and medical annotation tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4730 LNCS, pp. 595-608). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 4730 LNCS).

@inproceedings{b80e0b77ccac4a31a7b69b23455d9724,
title = "Overview of the ImageCLEFmed 2006 medical retrieval and medical annotation tasks",
abstract = "This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups (100 runs). Most runs were automatic, with only a few manual or interactive. Purely textual runs outnumbered purely visual runs, but most runs were mixed, using both visual and textual information. None of the manual or interactive techniques were significantly better than the automatic runs. The best-performing systems combined visual and textual techniques, but combinations of visual and textual features often did not improve performance. Purely visual systems performed well only on visual topics. The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which demonstrates a clear improvement in performance. The best system of 2005 would only have ranked in the middle of the field in 2006.",
keywords = "Automatic image annotation, Image retrieval, Medical information retrieval",
author = "Henning M{\"u}ller and Thomas Deselaers and Thomas Deserno and Paul Clough and Eugene Kim and Hersh, {William (Bill)}",
year = "2007",
language = "English (US)",
isbn = "9783540749981",
volume = "4730 LNCS",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "595--608",
booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
}
