Assessing the Feasibility of Large-Scale Natural Language Processing in a Corpus of Ordinary Medical Records: A Lexical Analysis

William R. Hersh, Emily M. Campbell, Susan E. Malveau

Research output: Contribution to journal › Article › peer-review


Abstract

Objective: Identify the lexical content of a large corpus of ordinary medical records to assess the feasibility of large-scale natural language processing.

Methods: A corpus of 560 megabytes of medical record text from an academic medical center was broken into individual words and compared with the words in six medical vocabularies, a common word list, and a database of patient names. Unrecognized words were assessed to determine whether algorithmic and contextual approaches could identify additional words, and the remainder were analyzed for spelling correctness.

Results: About 60% of the words occurred in the medical vocabularies, common word list, or names database. Of the remainder, one-third were recognizable by other means. Of the remaining unrecognizable words, over three-fourths were correctly spelled real words and the rest were misspellings.

Conclusions: Large-scale generalized natural language processing of the medical record will require expansion of existing vocabularies, spelling error correction, and other algorithmic approaches to map words into those from clinical vocabularies.
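The word-recognition procedure described in Methods can be illustrated with a short sketch. The Python code below shows one way such a lexical comparison could be organized: tokenize the record text into words, then test each distinct word against the medical vocabularies, the common word list, and the names database in turn. The file format (one word per line), helper names, and classification order are assumptions for illustration, not the authors' implementation.

# A minimal sketch of the lexical comparison described in Methods, assuming
# each vocabulary and name list is available as a newline-delimited plain-text
# word list. All names here are hypothetical.
import re
from collections import Counter

def load_wordlist(path):
    """Load a newline-delimited word list into a lowercase set."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def tokenize(text):
    """Break record text into individual alphabetic words."""
    return re.findall(r"[A-Za-z]+", text.lower())

def classify_corpus(corpus_text, vocabularies, common_words, names):
    """Tally how many distinct words each knowledge source recognizes."""
    counts = Counter()
    for word in set(tokenize(corpus_text)):
        if any(word in vocab for vocab in vocabularies):
            counts["medical vocabulary"] += 1
        elif word in common_words:
            counts["common word list"] += 1
        elif word in names:
            counts["names database"] += 1
        else:
            counts["unrecognized"] += 1
    return counts

Words falling into the "unrecognized" bucket would then be the candidates for the algorithmic, contextual, and spelling-correction analyses the abstract describes.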

Original language: English (US)
Pages (from-to): 580-584
Number of pages: 5
Journal: Journal of the American Medical Informatics Association
Volume: 4
Issue number: SUPPL.
State: Published - 1997

ASJC Scopus subject areas

  • Health Informatics
