Objective: Assess the performance of the SAPHIRE automated information retrieval system.

Design: Comparative study of automated and human searching of a MEDLINE test collection.

Measurements: Recall and precision of SAPHIRE were compared with those of novice physicians, expert physicians, and librarians for a test collection of 75 queries and 2,334 citations. Failure analysis assessed the efficacy of the Metathesaurus as a concept vocabulary; the reasons for retrieval of nonrelevant articles and nonretrieval of relevant articles; and the effect of changing the weighting formula used for relevance ranking of retrieved articles.

Results: Recall and precision of SAPHIRE were comparable to those of both physician groups, but lower than those of librarians.

Conclusion: The current version of the Metathesaurus, as utilized by SAPHIRE, was unable to represent the conceptual content of one-fourth of physician-generated MEDLINE queries. The most likely cause of retrieval of nonrelevant articles was the presence of some or all of the search terms in the article at frequencies high enough to trigger retrieval. The most likely cause of nonretrieval of relevant articles was the absence of the actual query terms from the article, with synonyms or hierarchically related terms present instead. Performance varied significantly when SAPHIRE's concept-weighting formulas were modified.
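For readers unfamiliar with the two evaluation metrics reported above, recall is the fraction of relevant citations that a search retrieves, and precision is the fraction of retrieved citations that are relevant. A minimal sketch of both measures in Python follows; the function names and toy citation identifiers are illustrative only and do not come from the study:

```python
def recall(relevant: set, retrieved: set) -> float:
    """Fraction of the relevant documents that were actually retrieved."""
    return len(relevant & retrieved) / len(relevant)

def precision(relevant: set, retrieved: set) -> float:
    """Fraction of the retrieved documents that are actually relevant."""
    return len(relevant & retrieved) / len(retrieved)

# Toy example: 4 citations judged relevant for a query; the search
# returns 3 citations, 2 of which are relevant.
relevant = {"c1", "c2", "c3", "c4"}
retrieved = {"c2", "c3", "c9"}

print(recall(relevant, retrieved))     # 0.5
print(precision(relevant, retrieved))  # 0.666...
```

In a test collection such as the one described here, relevance judgments fix the `relevant` set per query, so recall and precision can be computed for each searcher (or system) and averaged across the 75 queries.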
Original language: English (US)
Number of pages: 10
Journal: Journal of the American Medical Informatics Association
Publication status: Published - Jan 1994
ASJC Scopus subject areas
- Health Informatics