Comparison of EHR-based diagnosis documentation locations to a gold standard for risk stratification in patients with multiple chronic conditions

Shelby Martin, Jesse Wagner, Nicoleta Lupulescu-Mann, Katrina Ramsey, Aaron Cohen, Peter Graven, Nicole Weiskopf, David Dorr

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Objective: To measure variation among four different Electronic Health Record (EHR) system documentation locations versus ‘gold standard’ manual chart review for risk stratification in patients with multiple chronic illnesses. Methods: Adults seen in primary care with EHR evidence of at least one of 13 conditions were included. EHRs were manually reviewed to determine the presence of active diagnoses, and risk scores were calculated using three different methodologies and five EHR documentation locations. Claims data were used to assess cost and utilization for the following year. Descriptive and diagnostic statistics were calculated for each EHR location. Criterion validity testing compared the gold standard verified diagnoses against the other EHR locations and risk scores in predicting future cost and utilization. Results: Nine hundred patients had 2,179 probable diagnoses. About 70% of the diagnoses from the EHR were verified by the gold standard. For the subset of patients with both baseline and prediction year data (n=750), modeling showed that the gold standard was, on average, the best predictor of outcomes. However, combining all data sources yielded predictive performance nearly equivalent to the gold standard. Conclusions: EHR data locations were inaccurate 30% of the time for individual diagnoses, so a gold standard derived from chart review improved overall modeling. However, the impact on identification of the highest risk patients was minor, and combining data from different EHR locations was equivalent to gold standard performance. The reviewer’s ability to identify a diagnosis as correct was influenced by a variety of factors, including the completeness, temporality, and perceived accuracy of chart data.
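
As an illustration of the per-location diagnostic statistics described in the Methods, the sketch below computes the positive predictive value of each EHR documentation location against chart-review verification. It is a minimal, hypothetical example: the location names, record layout, and data are assumptions for demonstration, not the study's actual data or analysis code.

# Hypothetical illustration (not from the paper): positive predictive value
# of each EHR documentation location against chart-review ("gold standard")
# verification. Location names and records are invented for demonstration.
from collections import defaultdict

# Each record: (patient_id, diagnosis, documentation_location, verified_by_chart_review)
records = [
    ("p001", "diabetes",      "problem_list",    True),
    ("p001", "diabetes",      "encounter_dx",    True),
    ("p002", "heart_failure", "problem_list",    False),
    ("p002", "heart_failure", "medication_list", True),
    ("p003", "depression",    "encounter_dx",    False),
]

counts = defaultdict(lambda: {"verified": 0, "total": 0})
for _patient, _dx, location, verified in records:
    counts[location]["total"] += 1
    counts[location]["verified"] += int(verified)

# PPV per location: share of documented diagnoses confirmed as active on review.
for location, c in sorted(counts.items()):
    ppv = c["verified"] / c["total"]
    print(f"{location}: PPV = {ppv:.2f} ({c['verified']}/{c['total']})")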

Original language: English (US)
Pages (from-to): 794-809
Number of pages: 16
Journal: Applied Clinical Informatics
ISSN: 1869-0327
Publisher: Schattauer GmbH
Volume: 8
Issue number: 3
DOI: 10.4338/ACI-2016-12-RA-0210
State: Published - Aug 2 2017

Keywords

  • Data Quality
  • Forecasting
  • Health Information Systems
  • Multiple Chronic Conditions
  • Risk Stratification

ASJC Scopus subject areas

  • Computer Science Applications
  • Health Informatics
  • Health Information Management

Cite this

Comparison of EHR-based diagnosis documentation locations to a gold standard for risk stratification in patients with multiple chronic conditions. / Martin, Shelby; Wagner, Jesse; Lupulescu-Mann, Nicoleta; Ramsey, Katrina; Cohen, Aaron; Graven, Peter; Weiskopf, Nicole; Dorr, David.

In: Applied Clinical Informatics, Vol. 8, No. 3, 02.08.2017, p. 794-809.

Research output: Contribution to journal › Article
