TY - GEN
T1 - Overview of the ImageCLEFmed 2006 medical retrieval and medical annotation tasks
AU - Müller, Henning
AU - Deselaers, Thomas
AU - Deserno, Thomas
AU - Clough, Paul
AU - Kim, Eugene
AU - Hersh, William
PY - 2007
Y1 - 2007
N2 - This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups (100 runs). Most runs were automatic, with only a few manual or interactive. Purely textual runs outnumbered purely visual runs, but most runs were mixed, using both visual and textual information. None of the manual or interactive techniques were significantly better than automatic runs. The best-performing systems combined visual and textual techniques, but combinations of visual and textual features often did not improve performance. Purely visual systems performed well only on visual topics. The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which demonstrates a clear improvement in performance. The best system of 2005 would have placed in the middle of the field in 2006.
AB - This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups (100 runs). Most runs were automatic, with only a few manual or interactive. Purely textual runs outnumbered purely visual runs, but most runs were mixed, using both visual and textual information. None of the manual or interactive techniques were significantly better than automatic runs. The best-performing systems combined visual and textual techniques, but combinations of visual and textual features often did not improve performance. Purely visual systems performed well only on visual topics. The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which demonstrates a clear improvement in performance. The best system of 2005 would have placed in the middle of the field in 2006.
KW - Automatic image annotation
KW - Image retrieval
KW - Medical information retrieval
UR - http://www.scopus.com/inward/record.url?scp=38149141656&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=38149141656&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-74999-8_72
DO - 10.1007/978-3-540-74999-8_72
M3 - Conference contribution
AN - SCOPUS:38149141656
SN - 9783540749981
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 595
EP - 608
BT - Evaluation of Multilingual and Multi-modal Information Retrieval - 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006, Revised Selected Papers
PB - Springer-Verlag
T2 - 7th Workshop of the Cross-Language Evaluation Forum, CLEF 2006
Y2 - 20 September 2006 through 22 September 2006
ER -