Flood risk and a rolling dice
Author

Athanasios Liaskonis
Flooding is one of the most impactful natural disasters across the globe, frequently appearing in news headlines. Globally, about 1.81 billion people (23% of the global population) are at risk of flooding1. In England and Wales, one in six properties is at risk of flooding, based on the latest National Flood Risk Assessment2. Even the Netherlands, a pioneering country for its flood protection measures and its significant investment in flood defences, is still vulnerable to flooding, as demonstrated by the 2021 summer floods3.
Scary statistics and terms such as ‘flood risk’ and ‘100 year flood’ are thrown around frequently, but do we really understand what they mean?
As a civil engineer, I often hear these terms used in the wrong context. One popular misconception is that the 1 in 100 years flood occurs once every 100 years. The correct statement is that it occurs on average once every 100 years. If you are more confused now, that’s alright; statistics are confusing at first sight. Statistics, although not a favourite topic for most, are an important tool for informing our future decisions based on our past observations. I do believe that everyone should have a basic understanding of flood risk related terms and use them correctly, whether in a technical conversation or when asking your estate agent if your new property is at risk of flooding.
This inspired me to write this article. In the following paragraphs, we’ll explore and answer the following three questions:
- What is flood risk?
- What is the “100 year flood” and how likely is it?
- How do we manage it in practice?
What is flood risk?
To answer what flood risk is, I’ll refer to the definition of disaster risk from the United Nations Office for Disaster Risk Reduction4:
“The potential loss of life, injury, or destroyed or damaged assets which could occur to a system, society or a community in a specific period of time, determined probabilistically as a function of hazard, exposure, vulnerability and capacity.”
In the context of flood risk, we consider the hazard to be flooding; the exposure to be the likelihood of being flooded; the vulnerability to be the impact of flooding; and the capacity to be the ability to cope and recover fast from present day and future events.
Each component can be expressed qualitatively or quantitatively depending on the availability of empirical or numerical data. The exposure is usually quantifiable through statistical analysis of hydrometric data and hydraulic modelling to produce flood extent maps. The vulnerability and capacity are typically assessed qualitatively based on empirical knowledge and logical arguments. It is worth noting that the vulnerability and the capacity can also be quantified, but this requires substantial effort and time to gather the necessary data, which translates to higher costs. For simple developments, this adds marginal benefit over the qualitative approach. As such, it is mostly reserved for specific projects, like critical national infrastructure.
When considering flood risk, there are three important things to remember:
- Where the hazard is present, the risk is never zero.
- We design based on an acceptable risk.
- We employ secondary and tertiary measures to manage the residual risk.
We assess flood risk to inform an action. For example, to build or not to build in an area, and if we choose to build how do we build it to ensure it remains safe throughout the design life of the project without putting people at risk or damaging the environment.
It all comes down to judgement: what we perceive as an acceptable or tolerable level of risk, which varies from country to country and depends on local social and economic factors. An example is England’s National Planning Policy Framework, which determines the acceptable risk for new developments with reference to a flood map for planning5, a development vulnerability classification system6 and a development incompatibility matrix7.
Any risk above that level – the residual risk – is acknowledged and managed through additional measures, such as allowing for safety factors, raising floor levels above the design flood level (freeboard), or employing early warning, emergency management and evacuation plans.
What is the “100 year flood”?
Now let's focus on the exposure component and especially how the likelihood of flooding is quantified. This will enable us to understand what the “100 year flood” is. The likelihood is determined using two statistical analyses: a frequency analysis of available hydrometric data and the binomial distribution.
A frequency analysis helps us to estimate the frequency of different magnitude events. As a statistical process, it relies heavily on the availability of good quality, long-record data. The more data we have, the higher the confidence in our predictions. Generally, countries with an established monitoring network and frequent rainfall have enough data to perform such analyses. But this is a critical issue in arid regions, like the Middle East, where despite the presence of monitoring stations, rainfall and flood events are infrequent.
How much data is typically required to estimate the “100 year flood”? Kjeldsen and Jones’s analysis8 indicated 500 site-years as a recommended target, which is, for example, the equivalent of 10 hydrologically similar stations with a 50-year record each. When enough data is available, a frequency analysis can be performed to produce frequency curves of different hydrometric data, like peak rainfall intensity or peak river flows.
The average frequency is typically expressed in return period years, or the respective annual exceedance probability (AEP), which is the likelihood of an event of equal or greater magnitude to occur in a year. These should be considered two sides of the same coin, as both describe the same attribute using different words.
As such, a 100 year return period event – the “100 year flood” – equally has a 1 in 100 (one per cent) chance of happening in any given year, be it this year or the next.
Generally, people live on average more than a single year, and we need our infrastructure to be operational for many years – the design life of a system. In this case, the annual chance of flooding is not enough; we must understand the cumulative chance throughout the design life. For this, we employ the binomial distribution.
This is a mathematical technique that allows us to quantify the likelihood of an event occurring or not occurring (hence binomial) in subsequent trials of a test. In the context of flood risk, it enables us to quantify the likelihood of occurrence or no occurrence of a certain magnitude flood event throughout the design life of a system.
Likelihood and a rolling dice
To explain how this works, I’ll use the example of a fair six-sided dice. Let us pick the sixth side and, as a test, roll the dice multiple times, considering the test a success if the number six lands on top at least once (one or more times). Given our dice is fair, every side has a one in six chance of landing on top in every single roll. The likelihood (L) of success in n successive rolls depends on the probability of the six occurring in a single roll, P(X) = 1/6, and the probability of it not occurring in a single roll, P(not X) = 1 − 1/6 = 5/6, and is expressed mathematically as:
L = 1 − [P(not X)]^n = 1 − [1 − P(X)]^n
Using this formula, we can calculate that the chance of success after one roll is 16.67% (one in six), after two rolls 30.56%, after three rolls 42.13%, and so on. It is apparent that the more times we roll the dice, the higher the chance of the sixth side landing on top.
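The dice arithmetic above can be verified with a short Python sketch (the function name is my own, for illustration):

```python
def chance_of_six(n_rolls: int) -> float:
    """Likelihood of at least one six in n_rolls throws: L = 1 - (5/6)^n."""
    return 1 - (5 / 6) ** n_rolls

for n in (1, 2, 3):
    print(f"{n} roll(s): {chance_of_six(n):.2%}")
# 1 roll(s): 16.67%
# 2 roll(s): 30.56%
# 3 roll(s): 42.13%
```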
The same principle applies to flood probabilities, with one roll representing a calendar year and the likelihood of success representing the possibility of a target flood occurring in that year. Therefore, a six-sided dice can be used to estimate the likelihood of occurrence of a six year return period event, a ten-sided dice the likelihood of a ten year return period event and a 100-sided dice the likelihood of a 100 year return period event.
As mentioned, we don’t design for all possible outcomes but for an acceptable risk. We select a design event to size our primary flood defences and consider secondary and tertiary measures to manage the residual risk and the consequences of greater magnitude events. Thus, we are interested in the probability of exceedance of the design event, or when the expected consequences are severe – the risk of exceedance, R. To calculate this risk, we can re-write the above mathematical formula with reference to the likelihood of the design event (AEP) and the design life of the project (n years).
R = 1 − [1 − AEP]^n
Let us now assume we design a flood defence for the “100 year flood” (one per cent AEP) with an expected design life of n years. To estimate the risk of exceedance, R, we need to pick a 100-sided dice, with the 100th side representing the design event, and roll it n times. Thanks to the convenience of the maths, we can estimate that the risk of exceedance of the “100 year flood” in the next year is one per cent, in the next ten years 9.56%, and in the next 100 years 63.40%!
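The same one-line formula can be evaluated directly for any design life; a minimal Python sketch (the function name is my own):

```python
def risk_of_exceedance(aep: float, design_life_years: int) -> float:
    """Cumulative chance of at least one exceedance: R = 1 - (1 - AEP)^n."""
    return 1 - (1 - aep) ** design_life_years

# Risk of exceeding the one per cent AEP ("100 year") design event:
for n in (1, 10, 100):
    print(f"{n:>3} years: {risk_of_exceedance(0.01, n):.2%}")
#   1 years: 1.00%
#  10 years: 9.56%
# 100 years: 63.40%
```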
Next time you think it’s unlikely for the “100-year flood” to occur, it may be true for a single year, but it’s very likely to occur at least once in the lifetime of the project.
How do we manage it in practice?
If it is that likely for an industry standard design event to occur or be exceeded during the design life of most projects, why not design for higher magnitude, less frequent events? It is a question I raised at a roundtable earlier this year9.
The prime reason is cost. Designing for less frequent events is prohibitively expensive for most developments, and only specific developments, such as dams or nuclear power plants, can justify it. A secondary reason is uncertainty. Frequency analyses require a lengthy record and empirical judgement to reduce the uncertainty of our predictions, an uncertainty that rises dramatically for less frequent events.
Seemingly, that would be a good idea, but it would not be a sustainable one. Substantial funds and natural resources would be required to design and build against a standard whose accuracy is highly uncertain. Instead, it would be more reasonable to avoid the hazardous zones, design against a lower magnitude higher confidence standard, and allow for future adaptation measures.
Developing flexible designs and embedding capacity in our systems are essential to efficiently manage the residual risk. This strategy can ensure our cities and communities are resilient and capable of adapting to the uncertain future climatic conditions without depleting our current resources.
To conclude, the paragraphs above explained what flood risk is and its probabilistic nature, how we estimate the likelihood of a flood event, and the inherent uncertainty of our predictions. They clarified what the “100 year flood” is and that it is very likely to occur at least once, and not improbably twice, in a 100 year period. The physical limits of our natural resources and the uncertainty of our predictions suggest that it is best to avoid the hazard or, if the development is necessary, to develop flexible designs that have adequate capacity for adaptation to higher design standards in the future, when the uncertainty levels are lower. This reiterates the importance of managing residual flood risks carefully and effectively.
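The claim that two occurrences in a century are not improbable can be checked with the full binomial distribution; a minimal Python sketch, assuming independent years and a one per cent AEP (the function name is my own):

```python
from math import comb

def prob_at_least_k(aep: float, years: int, k: int) -> float:
    """P(at least k exceedances in `years` independent years, annual probability `aep`)."""
    p_fewer = sum(
        comb(years, i) * aep**i * (1 - aep) ** (years - i) for i in range(k)
    )
    return 1 - p_fewer

# Chance of two or more "100 year floods" within 100 years:
print(f"{prob_at_least_k(0.01, 100, 2):.1%}")  # 26.4%
```

A better-than-one-in-four chance of seeing the “100 year flood” twice in a century – hardly improbable.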
As a final remark, it should be remembered that if a flood hazard is present, the flood risk is never zero. Flood risk and its associated costs are very much like rolling a dice. The more throws of the dice, the more likely the anticipated event is to occur – but can you afford the costs of the gamble? In many instances, it is best to simply remove oneself from the table.
References
3 Strijker, Bart. 2021. “The 2021 Floods in the Netherlands: Datasets.” Data.4tu.nl, October.
5 GOV.UK. 2019. “Flood Map for Planning - GOV.UK.” Service.gov.uk. 2025.
7 GOV.UK. 2022. “Flood Risk and Coastal Change.” GOV.UK. August 22, 2022.
9 “Flood Resilience: The Journey from Unprecedented to Prepared.” 2025. Fathom. February 13, 2025.