Abstract
This paper describes the medical image retrieval and medical image annotation tasks of ImageCLEF 2007. Separate sections describe each of the two tasks, covering participation and an evaluation of the major findings from the results. A total of 13 groups participated in the medical retrieval task and 10 in the medical annotation task. The medical retrieval task added two new data sets, for a total of over 66,000 images. Topics were derived from a log file of the PubMed biomedical literature search system, creating realistic information needs with a clear user model in mind. The medical annotation task was organised in a new format in 2007: a hierarchical classification had to be performed, and classification could be stopped at any confidence level. This required algorithms to change significantly and to integrate a confidence measure into their decisions, so as to judge where to stop classifying and avoid making mistakes in the hierarchy. Scoring took both errors and unclassified parts into account.
| Original language | English (US) |
| --- | --- |
| Journal | CEUR Workshop Proceedings |
| Volume | 1173 |
| State | Published - 2007 |
| Event | 2007 Cross Language Evaluation Forum Workshop, CLEF 2007, co-located with the 11th European Conference on Digital Libraries, ECDL 2007 - Budapest, Hungary. Duration: Sep 19 2007 → Sep 21 2007 |
Keywords
- Image classification
- Image retrieval
- Medical imaging
- Performance evaluation
ASJC Scopus subject areas
- General Computer Science