In association with

  • Bristol, North Somerset and South Gloucestershire Integrated Care Board
  • West of England Academic Health Science Network
  • National Institute for Health Research

Planning an economic evaluation

After determining the approach for the evaluation, there are several considerations to address before roll-out to ensure success.

Data collection methods

Large-scale evaluations require more robust data collection methods. First, reliable baseline data must be available that is fully representative of the system without the intervention. Second, data collection during the intervention must be appropriate and reliable, involving all healthcare staff taking part in the pilot or trial. For smaller-scale projects, where data is not available and there is not enough time to establish a new data collection method, it may be possible to draw on literature and academic sources instead.


Large-scale evaluations also require more time before roll-out to ensure the intervention is implemented successfully. This involves engagement with a large number of stakeholders across providers and all clinical teams. During the evaluation itself, additional time is needed to ensure consistency in data collection across different sites.


Ultimately, establishing robust data collection methods and maintaining engagement with a large number of stakeholders will require additional time. The extended timeline of a large-scale evaluation demands a larger budget, and these additional considerations should be factored into budgeting from the outset.

Potential risks

Retrospectively capturing data: This may lead to a lack of data points or a small sample of data from the evaluation. Additionally, baseline data may not be available, forcing reliance on historic data or a forecasting approach. Determining data collection methods well before the intervention is implemented will ensure robust baseline data is captured, and that an appropriate data collection method is in place during the intervention.

Capturing time savings: Evaluations will commonly look to ascertain the impact of an intervention on staff time and whether efficiencies develop, for example the workforce impact of delivering rehabilitation using a digital platform in place of face-to-face delivery. As such, capturing accurate time savings is essential. Capturing true time savings on tasks can be difficult and relies on accurate input from staff. Confirming before the evaluation whether clinical systems, for example EMIS or Rio, capture staff time means that, if current systems do not capture the required data, bespoke time trackers can be developed and distributed to the participating providers.
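Where clinical systems do not record task durations, a bespoke time tracker can be as simple as staff logging minutes per task, with averages compared before and during the intervention. The sketch below is purely illustrative: all figures are hypothetical, not drawn from any real evaluation.

```python
# Hypothetical staff-reported minutes per task, baseline vs intervention.
baseline_minutes = [42, 38, 45, 40, 44]       # e.g. face-to-face delivery
intervention_minutes = [30, 28, 33, 29, 31]   # e.g. digital platform delivery

def mean(values):
    """Average of a list of recorded task durations."""
    return sum(values) / len(values)

saving_per_task = mean(baseline_minutes) - mean(intervention_minutes)
print(f"Average time saved per task: {saving_per_task:.1f} minutes")
```

Multiplying the per-task saving by task volumes and staff cost rates would then give a monetary estimate, subject to the caveats about staff-reported data above.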

Generalising results: Evaluations may be focused on a certain pathway or practice, with results that may not be applicable to the wider healthcare system. Considering in the initial planning of the evaluation whether the chosen pathway is representative of standard care will help to ascertain whether the findings can be appropriately generalised to additional pathways or practices.

Accounting for uncertainty: Evaluations carry uncertainty, whether from data collection methods or from a tendency to overstate benefits and understate costs. An optimism bias is applied in health economic analysis to account for optimistic estimates: the adjustment reduces the benefits and increases the costs relative to figures calculated from raw data. The more reliable the data, the lower the optimism bias applied. For example, costings based on local figures will carry a lower optimism bias than costings based on national figures.
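The optimism bias adjustment described above can be expressed as simple arithmetic. The bias rates and cost/benefit figures below are hypothetical placeholders chosen only to show the direction of the adjustment, not recommended values.

```python
def apply_optimism_bias(benefits, costs, bias_rate):
    """Reduce benefits and increase costs by a given optimism bias rate."""
    adjusted_benefits = benefits * (1 - bias_rate)
    adjusted_costs = costs * (1 + bias_rate)
    return adjusted_benefits, adjusted_costs

# Illustrative only: local data might attract a lower bias than national data.
local = apply_optimism_bias(benefits=100_000, costs=60_000, bias_rate=0.05)
national = apply_optimism_bias(benefits=100_000, costs=60_000, bias_rate=0.15)
```

With these placeholder rates, the locally costed scenario retains more of its estimated benefit and a smaller cost uplift than the nationally costed one, reflecting the greater reliability of local data.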

Why evaluate economically? Resources are scarce, demand is increasing, and the number of intervention options is growing. This raises three questions for decision-makers:

  • What interventions shall we commission?
  • How shall we implement them?
  • Who shall receive them?

Economic evaluation informs decision making by quantifying the expected health benefits and costs, and the uncertainty around them.
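One common way to quantify benefits, costs, and the uncertainty around them together is a net monetary benefit calculation combined with a simple probabilistic sensitivity analysis. The sketch below uses entirely hypothetical figures (QALY gain, cost per patient, and a £20,000 willingness-to-pay threshold) and is offered only as an illustration of the idea, not as the toolkit's method.

```python
import random

def net_monetary_benefit(qalys, cost, wtp=20_000):
    """NMB = QALYs gained x willingness-to-pay threshold - cost."""
    return qalys * wtp - cost

# Vary the hypothetical QALY and cost estimates and report how often
# the intervention shows a positive net monetary benefit.
random.seed(0)
trials = 10_000
positive = 0
for _ in range(trials):
    qalys = random.gauss(0.5, 0.1)      # hypothetical QALY gain per patient
    cost = random.gauss(8_000, 1_500)   # hypothetical cost per patient
    if net_monetary_benefit(qalys, cost) > 0:
        positive += 1
print(f"{positive / trials:.0%} of simulations show a positive net benefit")
```

Reporting the share of simulations with a positive net benefit, rather than a single point estimate, makes the uncertainty around the result visible to decision-makers.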

Top tips

  • Early engagement with all stakeholders
  • Early identification of data sources and whether data sharing agreements are required
  • Choice of evaluator (External/In house)
    1. In-house evaluators bring local knowledge of the setting and the practices involved
    2. External evaluators ensure impartiality in the evaluation, and bring expertise in methodology and in establishing data collection methods and data sharing agreements
  • Ensure the overall methodological approach is appropriate
  • Identify all the important health effects and resource costs and include them in the modelling
  • Acknowledge the limitations of the modelling and the approach taken and ensure the results of the analysis are interpreted appropriately.

Further advice & guidance

Visit the Evaluation Toolkit planning page for further advice and guidance on planning your evaluation.


The content of this page has been prepared by Unity Insights