Towards a Model for Quantitative Privacy Impact Assessment

At its heart, security risk assessment is a very simple business:

  1. You list all of the risks you can think of
  2. For each risk:
    1. You assign a probability to that risk
    2. You assign an impact, usually a financial impact, to the risk happening
    3. You multiply the impact by the probability

You then order the risks by the weighted impact and deal with them in that order. Using this technique, three types of risks bubble to the top:

  1. Risks that are very likely and also have at least a moderate impact if they do happen.
  2. Risks that are very unlikely but have a huge impact if they happen.
  3. Risks that are moderately likely and have moderate impact if they do happen.

This information is then used by risk managers, who can treat the weighted impact figure as an indication of the benefit of addressing a particular risk – one part of a cost-benefit analysis, the cost in that case being the cost of implementing a risk reduction strategy. Residual risk (the risk that is left over after you have implemented the risk reduction strategy) can also be factored into these calculations. Of course, the devil is in the detail, and the practicalities of performing these calculations are often much more difficult than the simple summary given here might suggest. For example:

  • You can only account for risks that you can anticipate. What about the “unknown unknowns” that are not taken into account?
  • It can be extremely difficult to come up with an accurate assessment of the probability of a particular risk occurring. Often this is a very subjective process because even when accurate historical data is available, translating that data to the probability of a risk impacting a specific organisation is problematic.
  • The calculation of the impact of a risk can be very subjective. In particular, should the indirect impact of the risk be taken into account (e.g. the time required by staff members to clean up after an incident)? If so, it is not always clear where the line should be drawn between costs that are business-as-usual and costs that result from the incident.

In any case, this is the most frequently used model for security risk assessment and it occurred to me that there might be some benefit in considering how a similar model could be applied in the case of a data privacy impact assessment. The purpose of this article is to describe some of my preliminary thoughts on construction of a model for quantitative data privacy impact assessment. I intend, in due course, to apply this model to the possible solutions to the Carrier-Grade NAT information gap.

In general, the aim of the model is to consider a scenario where there are multiple possible courses of action, each of which has a potential data privacy impact. How can the various courses of action be assigned either an absolute or relative quantitative privacy impact “score”? In the first instance, the model is broadly the same as the risk assessment model described above wherein all of the possible courses of action are listed, and figures analogous to probability and impact are assigned to each one. The combination of these two factors can then be used to compare the courses of action.

Calculating “probability”

Consider first the probability component of the risk assessment model. The probability that is of interest here is the probability that the course of action under consideration will lead to a privacy breach. Many of the issues involved in calculating probability in risk assessment situations are likely to arise here too. However, there are a number of interesting factors that can come into play, particularly in cases where relative risk assessments are being carried out:

  • A risk that involves a large chain of control failures is, generally speaking, less likely than a risk that involves a single control failure. This observation gives rise to the security principle of “defence in depth”.
  • Two risks that involve the same large chain of control failures, except that one of them requires a single extra control failure, are differentiated only by the probability of that extra failure. For example:
    • Risk A requires failure of controls a, b and c.
    • Risk B requires failure of controls a, b, c and d.
    • Unless control d fails with 100% certainty whenever the other three controls fail, we can observe that:
      • Risk B is less likely than Risk A to occur.
      • The only thing that differentiates the two risks is the probability of failure of control d.
  • The deployment of novel technologies is inherently more risky than the deployment of long-established technologies. “Unknown unknowns” are much less likely to arise with long-established technologies. Of course, it’s possible that “unknown unknowns” could arise in any technology, otherwise there would be no such thing as a zero-day vulnerability.
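The chained-control-failure observation can be made concrete with a small sketch. It assumes the control failures are independent, which is a simplification – in practice, failures are often correlated – and the failure probabilities are invented:

```python
# A sketch of chained control failures, assuming independent failures.
# The per-control failure probabilities below are invented examples.

p = {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.5}

def chain_probability(controls, probs):
    """Probability that every control in the chain fails (independence assumed)."""
    result = 1.0
    for c in controls:
        result *= probs[c]
    return result

risk_a = chain_probability(["a", "b", "c"], p)       # all of a, b, c fail
risk_b = chain_probability(["a", "b", "c", "d"], p)  # a, b, c and also d fail

# Risk B differs from Risk A only by the factor p["d"], so unless
# p["d"] == 1.0, Risk B is strictly less likely than Risk A.
assert risk_b == risk_a * p["d"]
```

This is also the arithmetic behind “defence in depth”: each independent control multiplied into the chain reduces the probability of the overall risk.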

In the context of privacy impact assessment, it is fortunate that the assessment is usually being performed to assess the privacy impact of several candidate courses of action and therefore the probability and impact only need to be calculated for a relatively small number of scenarios. In such cases it may be possible to assign a meaningful relative probability even in cases where a calculation of absolute probability is not possible.
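Where absolute probabilities cannot be justified, a purely ordinal scale may still allow the candidate courses of action to be ranked. The sketch below assumes a hypothetical three-band scale and invented course names:

```python
# A sketch of relative (ordinal) probability scoring, assuming only
# low/medium/high bands can be justified. The bands and courses of
# action below are invented examples.

BAND = {"low": 1, "medium": 2, "high": 3}

# Assessed likelihood that each candidate course of action
# leads to a privacy breach.
courses_of_action = {
    "course A": "medium",
    "course B": "low",
    "course C": "high",
}

# Rank from least to most likely to lead to a privacy breach.
ranked = sorted(courses_of_action, key=lambda c: BAND[courses_of_action[c]])
```

The point of the ordinal scale is that it only requires the assessor to defend comparisons (“A is more likely than B”), not absolute figures.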

Calculating a counterpart for “Impact”

Moving on now to the analogy for impact in the risk assessment model.

Different courses of action may have different impact characteristics should a breach take place. Factors that could influence the impact include:

  • The type of data compromised in the breach – A course of action where a breach could expose sensitive personal data would have a higher impact than a course of action that would only expose non-sensitive personal data.
  • The volume of data compromised in the breach – A course of action where a breach could expose the data of a larger number of individual data subjects would have a higher impact than a course of action that would expose the same data about a smaller number of data subjects.

Quantifying the impact may be a bit trickier than it appears at first glance. In simple cases, the number of data subjects whose data is affected by the breach could serve as the measure. However, this will not always be adequate, as the following example scenarios show:

  • One breach leads to the exposure of non-sensitive categories of personal data for a large number of data subjects. Another breach leads to the exposure of sensitive personal data of a small number of data subjects. How is the impact of these two scenarios to be compared? Clearly the number of data subjects alone is not adequate; it is necessary to assign a relative quantitative measure to the sensitivity of the data being lost.
  • One breach leads to the exposure of a small amount of data about a large number of data subjects. Another breach leads to the exposure of a large amount of data about a small number of data subjects. Again, the likely impact on the privacy of the individual data subjects needs to be taken into account.
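One way to handle both scenarios is to fold sensitivity and data volume into the score alongside the number of data subjects. The sketch below assumes impact can be approximated as subjects × sensitivity weight × fields per subject; the weights and figures are invented for illustration:

```python
# A sketch of a composite breach-impact score. The sensitivity weights
# and scenario figures below are invented examples, not calibrated values.

SENSITIVITY = {"non_sensitive": 1, "sensitive": 10}

def impact_score(subjects, sensitivity, fields_per_subject):
    """Impact = number of subjects x sensitivity weight x data volume per subject."""
    return subjects * SENSITIVITY[sensitivity] * fields_per_subject

# Scenario 1: a little non-sensitive data about many subjects.
s1 = impact_score(100_000, "non_sensitive", 2)   # 200,000
# Scenario 2: a little sensitive data about few subjects.
s2 = impact_score(5_000, "sensitive", 2)         # 100,000
# Scenario 3: a lot of non-sensitive data about few subjects.
s3 = impact_score(5_000, "non_sensitive", 40)    # 200,000
```

A raw subject count would rank scenario 1 twenty times worse than scenarios 2 and 3; the composite score captures the intuition that sensitivity and volume per subject also matter, though choosing defensible weights is exactly where the subjectivity noted earlier re-enters.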

One final scenario should be considered: a breach leads to the exposure of a small amount of data about data subjects for which organisation A (the organisation carrying out the privacy impact assessment) has an obligation, but a large amount of data about data subjects to which organisation A has no obligation. Can this be taken into account somehow? Should it be? Generally speaking, when conducting a risk assessment a line needs to be drawn somewhere, otherwise it would never be possible to assess all possible risks. In this case it would be reasonable for organisation A to consider only its own obligations, with the data privacy implications of its actions factoring into the risk assessments of the other organisation(s) whose data is likely to be impacted.


In summary, it seems that, in principle at least, a model for relative quantitative privacy impact assessment is feasible. Such a model will carry many of the same caveats and challenges that apply to current risk assessment models.
