Overview of the ImageCLEFmed 2006 medical retrieval and annotation tasks

Henning Müller, Thomas Deselaers, Thomas Lehmann, Paul Clough, Eugene Kim, William (Bill) Hersh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

13 Citations (Scopus)

Abstract

This paper describes the medical image retrieval and medical annotation tasks of ImageCLEF 2006. These tasks are described in a paper separate from the other ImageCLEF tasks to keep the size of the overview paper manageable. The two medical tasks are described with respect to their goals, the databases used, the topics created and distributed among participants, the results, and the techniques applied. The best-performing techniques are described in more detail to provide insight into successful strategies, and some ideas for future tasks are presented. The ImageCLEFmed medical image retrieval task had 12 participating groups and received 100 submitted runs. Most runs were automatic, with only a few manual or interactive ones. Purely textual runs outnumbered purely visual ones, but most runs were mixed, i.e., they used both visual and textual information. None of the manual or interactive techniques were significantly better than those used for the automatic runs. The best-performing systems combined visual and textual techniques, although combinations of visual and textual features often did not improve a system's performance; purely visual systems performed well only on the visual topics. The medical automatic annotation task used a larger database in 2006, with 10,000 training images and 116 classes, up from 57 classes in 2005. Twelve participating groups submitted 27 runs. Despite the much larger number of classes, results were almost as good as in 2005, showing a clear improvement in performance: the best-performing system of 2005 would only have placed in the upper middle of the 2006 field.
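
The "mixed" runs mentioned in the abstract combine the evidence of a textual and a visual retrieval system, typically by fusing the two ranked lists at score level. The sketch below illustrates one common form of such late fusion; the function names, the min-max normalization, and the weight alpha are illustrative assumptions, not the method of any particular participating system.

# Hypothetical sketch of late fusion of textual and visual retrieval scores,
# as commonly done in mixed ImageCLEFmed runs. The normalization scheme and
# the weight alpha are illustrative assumptions, not a specific system's setup.

def minmax_normalize(scores):
    """Rescale a {doc_id: score} map to [0, 1] so modalities are comparable."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
    return {doc: (s - lo) / span for doc, s in scores.items()}

def fuse_runs(text_scores, visual_scores, alpha=0.7):
    """Linearly combine normalized textual and visual scores.

    alpha weights the textual modality; 1 - alpha weights the visual one.
    Documents missing from one run contribute 0 for that modality.
    """
    text = minmax_normalize(text_scores)
    visual = minmax_normalize(visual_scores)
    docs = set(text) | set(visual)
    fused = {d: alpha * text.get(d, 0.0) + (1 - alpha) * visual.get(d, 0.0)
             for d in docs}
    # Return a ranked list, best first, as submitted in a TREC-style run.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: textual evidence dominates the ranking, visual evidence breaks ties.
ranking = fuse_runs({"img1": 12.0, "img2": 8.5}, {"img2": 0.9, "img3": 0.4})
print(ranking)

The abstract's observation that such combinations often did not help suggests the weighting is delicate: a poorly normalized or overweighted visual score can degrade an otherwise strong textual run.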

Original language: English (US)
Title of host publication: CLEF 2006 - Working Notes for CLEF 2006 Workshop, co-located with the 10th European Conference on Digital Libraries, ECDL 2006
Publisher: CEUR-WS
Volume: 1172
State: Published - 2006
Event: 2006 Cross Language Evaluation Forum Workshop, CLEF 2006, co-located with the 10th European Conference on Digital Libraries, ECDL 2006 - Alicante, Spain
Duration: Sep 20, 2006 - Sep 22, 2006

Other

Other: 2006 Cross Language Evaluation Forum Workshop, CLEF 2006, co-located with the 10th European Conference on Digital Libraries, ECDL 2006
Country: Spain
City: Alicante
Period: 9/20/06 - 9/22/06

Keywords

  • Image classification
  • Image retrieval
  • Medical imaging
  • Performance evaluation

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Müller, H., Deselaers, T., Lehmann, T., Clough, P., Kim, E., & Hersh, W. B. (2006). Overview of the ImageCLEFmed 2006 medical retrieval and annotation tasks. In CLEF 2006 - Working Notes for CLEF 2006 Workshop, co-located with the 10th European Conference on Digital Libraries, ECDL 2006 (Vol. 1172). CEUR-WS.
