Section 3 Conducting uncertainty analysis
Preparing to conduct the analysis
3.1 Plausible scenarios
Make sure that your outputs cover the full range of plausible outcomes
3.1.1 Ultimately our aim should be to communicate the overall uncertainty in our outputs, as this is what really matters to the end users. To do this we need to think about plausible ways that uncertainties might combine.
3.1.2 It’s tempting to simply take the ‘worst case’ for each uncertainty, plug those into your model, and use the answer as the overall ‘worst case’. But this would be too pessimistic and could lead customers to dismiss the results as unlikely. If each one of those uncertainties is at the limit of plausibility, it would be highly unlikely for them all to occur simultaneously by chance.
3.1.3 Our outputs should cover the full range of plausible outcomes, and these should be signed off by stakeholders. So, if we’re looking at ‘worst case’ combinations, we should also look at ‘best case’ combinations.
3.1.4 We need to consider correlations between variables and choose an appropriate technique. If one event happens, is another event more or less likely to happen? Or are they entirely independent? Some techniques make it easier than others to take correlations into account (e.g. Monte Carlo analysis; see table 3.1). If correlations exist and are not taken into account, extreme values may be under- or over-estimated.
Correlation between variables needs to be accounted for
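To illustrate this point, the sketch below shows a minimal Monte Carlo simulation in Python with two cost drivers that are positively correlated. All of the names, means, standard deviations and the correlation value are illustrative placeholders rather than agreed assumptions; the comparison simply shows how ignoring a positive correlation understates the spread of the combined output.

```python
import numpy as np

# Illustrative sketch: two cost drivers with assumed means, standard deviations
# and a positive correlation between them. All figures are placeholders.
rng = np.random.default_rng(seed=1)
n_sims = 100_000

means = np.array([10.0, 5.0])   # £m, central estimates
sds = np.array([2.0, 1.5])      # £m, agreed spread
corr = 0.6                      # assumed correlation between the two drivers

# Build the covariance matrix from the standard deviations and correlation
cov = np.array([[sds[0]**2,              corr * sds[0] * sds[1]],
                [corr * sds[0] * sds[1], sds[1]**2]])

# Correlated draws (and, for comparison, independent draws)
correlated = rng.multivariate_normal(means, cov, size=n_sims)
independent = rng.normal(means, sds, size=(n_sims, 2))

# Simple model: total cost is the sum of the two drivers
total_corr = correlated.sum(axis=1)
total_indep = independent.sum(axis=1)

# Ignoring a positive correlation understates the extremes
for label, total in [("correlated", total_corr), ("independent", total_indep)]:
    p5, p95 = np.percentile(total, [5, 95])
    print(f"{label:11s}: 5th percentile {p5:5.1f}, 95th percentile {p95:5.1f}")
```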
3.2 Testing outputs as part of quality assurance
It is best practice to test the outputs of the analysis before using or presenting them
3.2.1 After modelling uncertainty, it is best practice to test the outputs of the analysis before sharing the results. This helps you catch erroneous results and improves your understanding of your outputs, such as their extreme or most likely values.
3.2.2 Uncertainty analysis may produce ‘extreme outcomes’, where implausible results or scenarios are generated. These can be identified easily through visualisation or filtering (a sketch of such a check follows below), and could indicate an issue with the setup conditions of your analysis.
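As a minimal sketch of such a check, assuming the simulation results are held in a numpy array and that a plausibility range has been agreed with stakeholders (the bounds and data below are illustrative, not from the guidance):

```python
import numpy as np

# Illustrative sketch: flag simulated outcomes that fall outside an agreed
# plausibility range so they can be inspected before results are shared.
rng = np.random.default_rng(seed=2)
outputs = rng.normal(loc=100.0, scale=15.0, size=10_000)   # simulated results

lower_bound, upper_bound = 40.0, 160.0   # plausibility range agreed with stakeholders

implausible = outputs[(outputs < lower_bound) | (outputs > upper_bound)]
share = implausible.size / outputs.size

print(f"{implausible.size} implausible results ({share:.2%} of simulations)")
# A large share may point to an issue with input distributions or correlations.
```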
3.2.3 Unusual results may also indicate a weakness in the use of the technique you have chosen. For example, when using the Monte Carlo technique, there may be an unknown correlation that hasn’t been accounted for, or the incorrect distribution may have been used for a parameter.
Unusual results may indicate a weakness in the use of the technique
3.2.4 One element to test in your analysis may be potential system shocks, such as a recession. Does your uncertainty analysis need to account for these? Depending on your analysis, it may not always be useful to account for system shocks, and they may be better recorded in a risk register.
Table 3.1: Common techniques for analysing uncertainty
Technique | Outline | Advantages | Disadvantages | Examples |
---|---|---|---|---|
Monte Carlo: analyses large numbers of well understood uncertainties | Each source of uncertainty is assigned a distribution of its potential impact, which should be discussed and agreed with stakeholders where possible. Any interactions between these sources should also be modelled, as the results could otherwise be skewed. A single scenario is created by selecting values from these distributions and seeing what value the model would give under those conditions. This is repeated many times and the outputs can be analysed to assess the overall uncertainty. Be careful not to mix uncertainties and risks, as you will often want to handle them differently. | Allows well understood uncertainties to be modelled in detail. Enables analysis of the interactions between uncertainties. Provides a visual representation that customers often find helps their understanding of uncertainty. Can be used to assess the impact of acting to remove or reduce a source of uncertainty. | Highly dependent on the accuracy of the distributions of each uncertainty; where these are not accurate, it may give misleading results (spurious accuracy). Requires significant resource. Can give misleading results if correlations are not properly accounted for. | Can help assess overall uncertainty when you have uncertainty around many aspects of your model. Assessing uncertainty around a fund forecast or an estimate for a policy costing. Estimating the uncertainty around assumptions used in policy costings. An example of how MoJ use Monte Carlo analysis to assess uncertainty is provided here: Placeholder for link to Monte Carlo Template. @Risk is an Excel add-in to analyse risk using Monte Carlo; an example is provided here: Placeholder for DfE @Risk Example. |
Combine two normally distributed uncertainties: time and effort can be saved when all uncertainty distributions are normal (or can be assumed to be normal) | For two (or more) independent normally distributed uncertainties, you can produce a combined distribution by (1) summing the means to give the mean of the combined distribution, and (2) taking the square root of the sum of the variances to give the standard deviation of the combined distribution (see the worked sketch after this table). | Simple and less resource-intensive approach (relative to Monte Carlo) for combining uncertainties. | The user has to be sure the individual distributions are independent and normal. | Particularly useful in the finance context, where understanding the uncertainty around overall spend or budget is essential. |
Factor analysis: reduces large numbers of correlated sources of uncertainty to a handful of underlying factors | The sources of uncertainty are analysed and a smaller number of independent underlying factors are decided on; these should not be picked by the analyst in isolation, but agreed with relevant stakeholders. The sources of uncertainty are then expressed as functions of the underlying factors, and analysis is done on how uncertainty in the factors drives the modelling outcome. | Simplifies the sources of uncertainty. Controls for interaction between the sources. | Removes detail and may miss factors. Requires additional analysis of the factors. | Demographic analysis, where there are many correlated characteristics. |
Scenario testing: assesses the impact of a few specific possibilities | Rather than assessing the full range of outcomes, it may be appropriate to assess the impact of a few specific scenarios where there is insufficient knowledge of the underlying uncertainty. These scenarios should be agreed with stakeholders to ensure they are realistic and provide a useful result. This analysis contains no information on how likely a scenario is to occur, so care needs to be taken when communicating the results. For example, while a worst-case figure may be useful to provide an upper bound of costs, it should only be presented alongside a best-case figure to give a full range of outcomes. | Can produce a more detailed analysis of a smaller range of scenarios. Can assess the impact of events with unknown probability, e.g. system shocks. Can produce best-case and worst-case reasonable scenarios. Useful when you don’t know the range of uncertainty. | Choice of scenarios can be arbitrary and potentially misleading. Contains no information on the likelihood of occurrence. | Forecasting where a range of policy options is being considered, particularly where the likelihood of an event occurring is unknown, for example early analysis of Brexit scenarios. |
Judgement: a subjective interpretation of the outputs (e.g. +/- 10% of output) | Where there is too little information or time to do a quantified analysis, it may be better to provide a judgement on the uncertainty than nothing at all. Wherever possible, this should be given a quantified value, even if this is decided subjectively, as descriptive terms may be interpreted very differently by different people. If no figure can be given, a RAG rating may be an alternative that removes some of the ambiguity. However the uncertainty is measured, make clear that it is a subjective opinion rather than the result of analysis, to prevent it being misused. | Can be produced very quickly. Requires little to no data. | Highly subjective. Provides no information on the sources of uncertainty. | Providing context around a high-priority figure that needs to be submitted quickly. Analysis based on a data source of unknown reliability. Analysis where the expected range of results would lead to the same outcome. |
Dominant uncertainty: use when one uncertainty has a much greater impact than all the others | If you have a dominant uncertainty, the uncertainty due to this one factor is a reasonable proxy for the overall uncertainty. You can test the impact of uncertainty on outputs by varying the inputs and understanding the robustness of your assumptions. If the dominant uncertainty can be quantified, the outputs of this stage may simply be a sensitivity analysis. Conducting dominant uncertainty analysis may underestimate the overall uncertainty; however, when time is very tight it may be a proportionate response. | Only the main uncertainty needs to be tested, which saves time and resource. Focusing on one input may concentrate attention on the main uncertainty and lead to additional resource being spent on understanding and reducing it, having a favourable impact overall. | May underestimate overall uncertainty as other factors are excluded. | The Accuracy Tracking Tool (link to DfE Accuracy Tracking Tool) can be used to estimate the residual uncertainty once the dominant uncertainty has been modelled. This tool assesses the accuracy of different forecast elements and shows the percentage that the dominant and residual uncertainties each contribute to the total error. |
Break-even analysis: useful to understand the point at which a saving becomes a cost | Even in complex models with interdependencies, it may be helpful to take a step back and think about the critical inputs that affect the outputs of a model. Other techniques give a range of outputs; break-even analysis works backwards: if we were to break even, what would the input have to be? We consider how much the input has to change before we break even, and the probability of this occurring. | Gets customers to think about the assumptions used in the modelling and helps their understanding of the critical break-even point. Simple to conduct. | | Calculating how far the take-up rate of a policy can fall before the savings become a cost. For example, you might have a policy with a £5m benefit, and a range of £5m cost to £15m benefit. Overall, this might look appealing. But if the take-up rate is assumed to be 50% and break-even analysis shows it must be at least 30% for a benefit, assumption owners will have to think about how likely that is (see the worked sketch after this table). |
Dealing with longer term uncertainty: if long-term uncertainty is unknown it is possible to extrapolate future uncertainty (particularly if short or medium-term uncertainty is known) | For a known medium-term uncertainty distribution, it is possible to extrapolate the long-term standard deviation using its rate of change over the short to medium term (see the sketch after this table). This will provide an analytically robust estimate of future uncertainty but may not be a true reflection of long-term uncertainty. Alternatively, long-term uncertainty could be assumed to be unchanged from medium-term uncertainty. This is useful in situations where long-term action won’t allow large deviations; for example, long-term forecasts of inflation are unlikely to have more uncertainty than medium-term forecasts, as the Bank of England (and the Government) would take measures to control its level. | Provides a method for extrapolating into the longer term. | If the distribution of long-term uncertainty is known, this is usually preferable: even if it is unmanageable, it is an accurate reflection of output confidence, and the resulting action should reflect this. Consider whether the extrapolation is a true reflection of future uncertainty, and whether ever-increasing uncertainty becomes unmanageable and irrelevant after a certain point. | When needing to use inflation measures. |
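The following is a worked sketch in Python of the ‘combine two normally distributed uncertainties’ rule from table 3.1. The two cost lines, their means and their standard deviations are illustrative figures, assumed to be independent and normally distributed.

```python
import math

# Two independent, normally distributed cost uncertainties (illustrative £m figures)
mean_a, sd_a = 20.0, 3.0
mean_b, sd_b = 35.0, 4.0

# Combined distribution: sum the means; take the square root of the
# summed variances to get the combined standard deviation.
combined_mean = mean_a + mean_b
combined_sd = math.sqrt(sd_a**2 + sd_b**2)

print(f"Combined mean: £{combined_mean:.1f}m")
print(f"Combined standard deviation: £{combined_sd:.1f}m")
# Roughly 95% of outcomes lie within about two standard deviations of the mean
print(f"Approx. 95% range: £{combined_mean - 2*combined_sd:.1f}m "
      f"to £{combined_mean + 2*combined_sd:.1f}m")
```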
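The break-even example in table 3.1 can also be sketched in code. The gross benefit and fixed cost below are hypothetical figures, back-solved so that a 50% take-up rate gives a £5m net benefit and break-even falls at 30%, consistent with the example; they are not official values, and the linear take-up model is an assumption for illustration.

```python
# Hypothetical parameterisation of the take-up example in table 3.1
gross_benefit_full_takeup = 25.0   # £m if take-up were 100% (illustrative)
fixed_cost = 7.5                   # £m incurred regardless of take-up (illustrative)

def net_benefit(take_up_rate: float) -> float:
    """Net benefit (£m) under a simple linear take-up model."""
    return gross_benefit_full_takeup * take_up_rate - fixed_cost

# Work backwards: the take-up rate at which the policy breaks even
break_even_rate = fixed_cost / gross_benefit_full_takeup

print(f"Net benefit at 50% take-up: £{net_benefit(0.50):.1f}m")
print(f"Break-even take-up rate: {break_even_rate:.0%}")
```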
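Finally, a minimal sketch of extrapolating longer term uncertainty, assuming the standard deviation of forecast error is known for the first few years ahead and that its growth can be projected with a simple linear trend. The horizon years and standard deviations are placeholder values, and a linear trend is only one possible choice.

```python
import numpy as np

# Illustrative sketch: extrapolate the standard deviation of forecast error
# into the long term from its growth over known short/medium-term horizons.
known_horizons = np.array([1, 2, 3, 4, 5])          # years ahead (placeholder)
known_sds = np.array([1.0, 1.6, 2.1, 2.7, 3.2])     # forecast-error SDs (placeholder)

# Fit a simple linear trend to the known SDs and project it forward
slope, intercept = np.polyfit(known_horizons, known_sds, deg=1)

for horizon in (10, 15, 20):
    projected_sd = intercept + slope * horizon
    print(f"Year {horizon}: projected SD approx. {projected_sd:.1f}")

# Sense-check whether ever-increasing uncertainty remains plausible; an
# alternative is to hold the SD flat at its medium-term level.
```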