Cancer modeling has become an accepted method for generating evidence about the comparative effectiveness and cost-effectiveness of candidate cancer control policies across the continuum of care. Models of early detection policies require inputs concerning disease natural history and screening test performance, which are often subject to considerable uncertainty. Validating a model against an external data source can increase confidence in the reliability of assumed or calibrated inputs. When a model fails to validate, this presents an opportunity to revise those inputs, thereby learning new information about disease natural history or diagnostic performance that could both enhance the model results and inform real-world practices. We discuss the conditions necessary for validly drawing conclusions about specific inputs, such as diagnostic performance, from model validation studies. Doing so requires faithfully replicating the validation study in terms of its design and implementation, and being alert to the problem of non-identifiability, under which different combinations of inputs can reproduce the same validation target, so that a failure to validate may have explanations other than the one identified.
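The non-identifiability problem can be illustrated with a deliberately simplified sketch (hypothetical, not from the paper): if the observed screen-detection rate is the product of preclinical disease prevalence and test sensitivity, two quite different assumptions about natural history and diagnostic performance can match the same external validation target, so the target alone cannot tell us which input to revise.

```python
# Hypothetical toy model: expected screen-detection rate at a single screen
# equals (prevalence of preclinical, screen-detectable disease) x (test sensitivity).
# All numbers below are illustrative assumptions, not estimates from the paper.

def screen_detection_rate(preclinical_prevalence: float, sensitivity: float) -> float:
    """Expected fraction of a screened cohort detected at one screening round."""
    return preclinical_prevalence * sensitivity

# Scenario A: low preclinical prevalence, high test sensitivity.
rate_a = screen_detection_rate(preclinical_prevalence=0.10, sensitivity=0.80)

# Scenario B: high preclinical prevalence, low test sensitivity.
rate_b = screen_detection_rate(preclinical_prevalence=0.20, sensitivity=0.40)

# Both scenarios reproduce the same validation target (0.08), so a validation
# study that matches this rate cannot, by itself, distinguish a natural-history
# error from a diagnostic-performance error: the inputs are non-identifiable.
assert abs(rate_a - rate_b) < 1e-12
```

In practice, disentangling such inputs requires additional data, such as interval cancer rates or follow-up of screen-negative participants, that respond differently to prevalence and sensitivity.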