There is a significant increase in the use of simulation models to underpin economic cost-benefit analyses of medical drug development. These models need to be valid and accepted by stakeholders, but guidelines thus far are sparse to non-existent. The goal of this project is to set up guidelines for model validation and acceptance.
ZonMW project: Disease models used for decisions on expensive drugs: a new instrument to enable structured model assessment.
About one third of the drug reimbursement applications in the Netherlands claim greater effectiveness than existing treatments and therefore require a cost-effectiveness analysis (CEA) as part of the reimbursement dossier. The vast majority of such applications use a health economic decision model (HE model), in the form of a computer simulation model. More generally, many cost-effectiveness studies use models to combine information from different sources and/or to extrapolate from a specific study to the setting that is relevant for the decision at hand.
An obvious requirement for model-based CEAs to be used in (reimbursement) decisions is that the models are valid. Guidelines on model validation exist, but they have limitations. First, they often present ideals rather than feasible acceptability criteria. Second, guidelines are written to support model development, not model assessment. Finally, their recommendations are necessarily general and not geared towards validating HE models. In CEA practice, modelers have to balance rigor with feasibility; validation information is needed to support expert review of finished models; and the main interest is validity with respect to CEA outcomes.
The current proposal enhances the possibilities for decision makers and their advisors to transparently and consistently evaluate model-based CEA results by developing a model assessment tool.
The tool consists of a checklist of validation tests relevant for HE models and an assessment of stakeholder perception. The checklist asks for performance criteria or an opt-out explanation and is to be filled out by the model developers to help reviewers assess the model, while stakeholder perception informs further appraisal of the model and its outcomes. The research strategy is to have a Delphi panel select validation tests from a broad inventory based on guidelines from several disciplines. The stakeholder perception assessment will be developed by adapting existing user experience tools. The entire tool will be tested on example models and discussed in a workshop to improve it further. The project group contains expertise on HE models, quality assessment of models, the statistics of validation tests, and the use of expert opinion. Furthermore, several members have practical experience with reimbursement and/or involving stakeholders, ensuring that the project can be successfully performed.
The new tool improves the model review process by providing a clear overview of the validation efforts performed and their results in relation to the model's aim, that is, to support a decision regarding cost-effectiveness.