# Methods for quantifying the uncertainty in NHS reference costs (Part 2)

This is a follow-up post, continuing from the previous post.

Funnel plots (or funnel control charts) are a popular way to explore outcomes from organisations of different sizes.

They make it possible to see how much variability can be explained by random chance (small-sample error) and how much reflects genuine heterogeneity between organisations.

Plotting the unit cost of colonoscopy across organisations suggests that a funnel plot may aid interpretation.

I assumed that the logarithm of the mean cost from each organisation is normally distributed (unfortunately we are given the arithmetic mean rather than the geometric mean, so there is likely some bias) as follows: $Y_i \sim \mathcal{N}\left(\mu, \frac{\sigma^2}{n_i}+\tau^2\right)$

where $Y_i$ is the log(Unit cost) for organisation $i$.
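Under this model, the 95% funnel limits on the cost scale can be sketched directly: the sampling variance $\frac{\sigma^2}{n_i}$ shrinks as activity grows, but the heterogeneity term $\tau^2$ puts a floor under the funnel's width. A minimal sketch in Python (the parameter values are illustrative, chosen to be close to the fitted estimates reported below):

```python
import numpy as np

# Illustrative parameter values, close to the fitted estimates reported later
mu, sigma, tau = 6.363, 0.878, 0.492

# 95% funnel limits on the cost scale for a range of activity levels n_i.
# The limits narrow with activity but never collapse to the centre line,
# because the tau^2 heterogeneity term does not shrink with n.
n = np.array([10, 50, 200, 1000, 5000])
se = np.sqrt(sigma**2 / n + tau**2)
lower = np.exp(mu - 1.96 * se)
upper = np.exp(mu + 1.96 * se)
```

Note that, unlike a conventional funnel plot with $\tau = 0$, these limits converge to $\exp\{\mu \pm 1.96\tau\}$ rather than to the centre line.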

Using maximum likelihood estimation for colonoscopy costs (FZ51Z, FZ52Z and FZ53Z) from 2014-15, I fitted the model, as shown in this funnel plot:

The Stata command I used was:

```stata
mlexp (-0.5*( ln(2*_pi*(exp(2*{lnsigma=1})/activity + exp(2*{lntau=-2}))) + (ln_unitcost - {mu})^2/(exp(2*{lnsigma})/activity + exp(2*{lntau})) ))
```
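The same likelihood can be maximised outside Stata. Below is a sketch in Python with scipy, fitted to simulated data (the reference-cost file itself is not used here, so the "true" parameter values are stand-ins); the variable names `activity` and `ln_unitcost` mirror the Stata code, and $\sigma$ and $\tau$ are estimated on the log scale to keep them positive, as in the `mlexp` expression:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulated stand-in for the reference-cost data
activity = rng.integers(5, 500, size=400)      # procedures per organisation
mu_t, sigma_t, tau_t = 6.36, 0.88, 0.49        # 'true' values for the simulation
ln_unitcost = rng.normal(mu_t, np.sqrt(sigma_t**2 / activity + tau_t**2))

def negloglik(params):
    # Same likelihood as the mlexp expression: Y_i ~ N(mu, sigma^2/n_i + tau^2)
    mu, lnsigma, lntau = params
    var = np.exp(2 * lnsigma) / activity + np.exp(2 * lntau)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (ln_unitcost - mu) ** 2 / var)

res = minimize(negloglik, x0=[6.0, 0.0, -1.0], method="Nelder-Mead")
mu_hat, sigma_hat, tau_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
```

On simulated data the optimiser recovers parameters close to the values used to generate it, which is a useful sanity check on the likelihood before fitting the real data.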

This produced the following parameter estimates:

| Parameter | Coef. | Std. Err. | 95% Conf. Interval |
| --- | --- | --- | --- |
| $\mu$ | 6.363 | 0.011 | 6.343 to 6.384 |
| $\sigma$ | 0.878 | 0.029 | 0.821 to 0.934 |
| $\tau$ | 0.492 | 0.011 | 0.471 to 0.513 |

We can now show the fitted model for the distribution of ‘true’ unit costs across organisations, $\ln{\mathcal{N}\left(\mu,\tau^2\right)}$ (a lognormal distribution with log-scale mean $\mu$ and log-scale standard deviation $\tau$):

The quartiles of this distribution are at £416, £580 and £809. These compare to the quartiles of the raw data (as used in the simple method described in the previous post), which are £401, £587 and £868. It is unsurprising that the interquartile range is greater in the raw data, since this includes the $\frac{\sigma^2}{n_i}$ source of variance.
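The quartiles quoted above follow directly from the fitted lognormal. A quick check in Python, using the reported estimates (so the figures can differ from the text by a pound or so through rounding):

```python
import math

mu, tau = 6.363, 0.492     # fitted estimates from the table above
z75 = 0.6744898            # Phi^{-1}(0.75), the upper-quartile normal deviate

# Quartiles of a lognormal are the exponentiated quartiles of the
# underlying normal on the log scale
q1 = math.exp(mu - z75 * tau)   # lower quartile, ~ £416
q2 = math.exp(mu)               # median, ~ £580
q3 = math.exp(mu + z75 * tau)   # upper quartile, ~ £808-£809
```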

Now that we have quantified the uncertainty due to heterogeneity between organisations, we can consider how this relates to the distribution we want to assign to the unit cost in a probabilistic sensitivity analysis.

We need to think from the perspective of the NHS. We are (usually) considering doing more of these procedures, and this has an opportunity cost. We don’t know where the additional procedures will be done, but we might start with the assumption that they will be distributed as they are currently.

As the number of new procedures increases, by the law of large numbers the average cost of the new procedures approaches the expected value of the distribution (assuming there is a constant marginal cost), which is $\exp\left\{\mu+\frac{\tau^2}{2}\right\}$.
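This expectation can be checked against the reported estimates (again, rounding may shift the result by a pound or so):

```python
import math

mu, tau = 6.363, 0.492     # fitted estimates from the table above

# Mean of a lognormal exceeds its median by the factor exp(tau^2/2)
mean_cost = math.exp(mu + tau**2 / 2)   # ~ £655
median_cost = math.exp(mu)              # ~ £580, for comparison
```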

So, perhaps we really want to look at the uncertainty in this, due to model fitting.

```stata
nlcom (mean: exp(_b[mu:_cons]+0.5*exp(2*_b[lntau:_cons])))
```

This gives us a mean cost of £655 (95% CI, £639 to £671), which is substantially higher than the actual sample mean of £572. Calculating a median from the model fit gives £580 (95% CI, £568 to £592).

To investigate how much this is an artefact of the model over-fitting to low-activity organisations, I removed organisations with fewer than 50 procedures. The sample mean fell slightly to £565, but the mean estimated from the model fit dropped substantially to £577 (95% CI, £559 to £595).

The table below summarises the estimates from alternative methods:

| Method | Mean | 95% Conf. Interval |
| --- | --- | --- |
| MLE | £655 | £639 to £671 |
| MLE excl. small | £577 | £559 to £595 |
| Simple | £572 | £568 to £576 |
| Simple excl. small | £565 | £561 to £569 |

The model-fitting method leads to higher mean estimates (undesirable) and considerably wider confidence intervals (possibly desirable, since they reflect uncertainty that the simple method ignores).

### Conclusions

Alternative methods of quantifying the uncertainty due to heterogeneity can lead to different estimates. If the results of a decision model are sensitive to a particular unit cost, care should be taken to consider whether the standard approach adequately estimates the uncertainty.