Multimodal medical image retrieval improving precision at ImageCLEF 2009

Saïd Radhouani, Jayashree Kalpathy-Cramer, Steven Bedrick, Brian Bakke, William Hersh

Research output: Contribution to journal › Conference article


Abstract

We present results from Oregon Health & Science University's participation in the medical retrieval task of ImageCLEF 2009. This year, we focused on improving retrieval performance, especially early precision, for multimodal medical queries. These queries contain visual data, given as a set of example images, and textual data, given as a set of words belonging to three dimensions: anatomy, pathology, and modality. To solve these queries, we use both the textual and the visual data in order to better interpret their semantic content. Using the text associated with an image, it is relatively easy to extract the anatomy and pathology, but extracting the modality is challenging, since it is not always explicitly described in the text. To overcome this problem, we utilized the visual data. We combined text-based and visual-based search techniques to produce a single ranked list of relevant documents for each query. The resulting runs outperformed our baseline by 43% in mean average precision (MAP) and by 71% in precision at the top five documents, which we attribute to the use of domain dimensions and the combination of visual-based and text-based search techniques.
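The abstract describes combining text-based and visual-based search results into a single ranked list, but does not specify the fusion method. As a hedged illustration only (not the authors' actual technique), the sketch below shows a common late-fusion approach: min-max normalizing each modality's scores and merging them with a weighted sum. The function name, document identifiers, and the weight `alpha` are all assumptions for this example.

```python
def fuse_rankings(text_scores, visual_scores, alpha=0.7):
    """Generic score-level late fusion of two retrieval runs.

    Each argument maps document ID -> raw retrieval score. Scores are
    min-max normalized per modality, then combined as a weighted sum;
    a document missing from one run contributes 0 for that modality.
    Returns document IDs sorted from most to least relevant.
    """
    def normalize(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        if hi == lo:  # all scores equal: treat them as equally relevant
            return {doc: 1.0 for doc in scores}
        return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

    t = normalize(text_scores)
    v = normalize(visual_scores)
    docs = set(t) | set(v)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0)
             for d in docs}
    return sorted(fused, key=fused.get, reverse=True)


# Hypothetical usage: a text run and a visual run over toy document IDs.
text_run = {"doc1": 2.0, "doc2": 1.0, "doc3": 0.5}
visual_run = {"doc2": 0.9, "doc4": 0.3}
ranked = fuse_rankings(text_run, visual_run)
```

With `alpha` above 0.5 the text evidence dominates, which matches the abstract's observation that anatomy and pathology are easiest to extract from text, while visual evidence mainly helps recover the modality dimension.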

Original language: English (US)
Journal: CEUR Workshop Proceedings
Volume: 1175
State: Published - Jan 1, 2009
Event: 2009 Cross Language Evaluation Forum Workshop, CLEF 2009, co-located with the 13th European Conference on Digital Libraries, ECDL 2009 - Corfu, Greece
Duration: Sep 30, 2009 - Oct 2, 2009

Keywords

  • Domain dimensions
  • Image classification
  • Image modality extraction
  • Medical image retrieval
  • Performance evaluation

ASJC Scopus subject areas

  • Computer Science (all)
