What Is a Risk Model?
Key Takeaways
- Definition: A risk model is a mathematical model of uncertainty and its effect on objectives.
- Thinking Modes: Mathematics reflects “slow” deliberate thinking as opposed to impulsive “fast” thinking.
- Causal Understanding: Mathematical modeling refines narratives and causal explanations into predictive frameworks.
- Decision Relevance: Models should always be grounded in objectives and uncertainties, not abstract math for its own sake.
- Pitfalls: Quantitative risk analysis fails when models are applied without context, data, or clarity of purpose.
Deep Dive
In his latest article, Graeme Keith explores the foundations of risk modeling, tracing its roots from ancient mathematics to modern decision-making. He argues that models should begin with real-world problems, not abstract equations, and makes the case for why risk modeling must remain intelligible to decision makers.
Why Models Must Serve Decisions
A risk model is just a mathematical model of risk, and risk is the effect of uncertainty on objectives; a risk model is therefore just a mathematical model of uncertainty and its effect on objectives.
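To make that definition concrete, here is a minimal sketch, not taken from Keith's article: the scenario, numbers, and distributions are illustrative assumptions. The objective is staying within a project budget, the uncertainty is cost, and the model quantifies the effect of the one on the other.

```python
import random

# Minimal risk model: uncertainty (project cost) and its effect on an
# objective (total cost stays within budget). All numbers are illustrative.
BUDGET = 1_000_000      # objective: total cost <= budget
N_TRIALS = 100_000      # Monte Carlo sample size

overruns = 0
for _ in range(N_TRIALS):
    # Uncertain inputs, expressed as probability distributions.
    labour = random.gauss(600_000, 80_000)                     # mean, std dev
    materials = random.triangular(200_000, 500_000, 300_000)   # low, high, mode
    if labour + materials > BUDGET:
        overruns += 1

print(f"P(cost exceeds budget) ≈ {overruns / N_TRIALS:.1%}")
```

The model is deliberately trivial; the point is only that "uncertainty" and "objective" are both explicit in the code, which is what makes it a risk model rather than a cost calculator.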
The word mathematics comes from the Ancient Greek máthēma (μάθημα), meaning “that which is learned,” as distinguished from that which we know by intuition. In modern terms, we might identify this broad, pre-Aristotelian sense of mathematics with Daniel Kahneman’s concept of “slow” thinking (deliberate, rational, learned), as distinguished from its limbic forerunner “fast” thinking (impulsive, emotional, intuitive).
When intuition fails, we must bring to bear what we have learned. This is mathematics in the ancient, broad sense. But mathematics in its narrower, modern sense is simply a refined form of this slow, deliberate, learned mode of thought. There is an uninterrupted continuum of experience from the simplest mental models with which we make sense of the world to the most abstruse and inaccessible treatises of modern advanced mathematics.
A Hierarchy of Slow Thinking
When we think deliberately, “slowly,” about the world, we try to find patterns in the chaos: repetitions, similarities, commonalities. We corral these patterns into categories of things and we conjecture relationships between these categories.
At the next level, we start to bring a narrative structure to these relationships; that is, we start to think about cause and effect. Causal relationships are the most interesting, because understanding causal relations allows us to think about intervention. First we describe the world, then we predict how it might unfold, and then we try to predict how our interventions might make that development better for us.
The next level of refinement—what we might call science, though the broader Wissenschaft is closer—is to define our categories more carefully and to formalize our conjectures about the relationships between them.
This allows us to test and compare paradigms: Do they make sense? Are they internally consistent and consistent with other paradigms? Are they consistent with the available information? Are they insightful?
Mathematical modeling in the modern sense is the final refinement in this hierarchy of slow thinking. It’s the part where we conjecture mathematical relationships between things we can measure and assign quantitative values.
The benefits of this are substantial, but they come at a price: the language of those relationships is now harder to understand, and understanding may be limited to the set of practitioners who “speak math.”
But in the same way that mathematical modeling is just refined slow thinking, the language of “math” is just a highly refined and efficient language for describing relationships in a model. It ought always to be possible to unpack that language and explain it to anyone. The process of building, testing, and using mathematical models should be intelligible and accessible to everyone, because everyone does it all the time at some level, at least whenever they think.
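As a toy illustration of this unpacking (my example, not Keith's): the expression 1 − (1 − p)ⁿ looks opaque, but it says something anyone can follow: the chance that a risk event occurs at least once in n independent exposures, each with probability p.

```python
# Unpacking a compact mathematical relationship into plain steps.
# Claim: P(at least one event in n exposures) = 1 - (1 - p)**n
# Plain language: the only way to avoid the event entirely is to dodge it
# every single time; everything else counts as "at least once".

def p_at_least_once(p: float, n: int) -> float:
    p_dodge_once = 1 - p             # the event does not happen on one exposure
    p_dodge_all = p_dodge_once ** n  # ...and it never happens across n exposures
    return 1 - p_dodge_all           # the complement: it happens at least once

# A 5% annual incident probability over a 10-year horizon (illustrative numbers):
print(f"{p_at_least_once(0.05, 10):.1%}")  # ≈ 40.1%
```

Each line of code is a sentence of ordinary reasoning; the formula is just the same reasoning compressed.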
Putting the Mathematical Cart Before the Modeling Horse
“I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.”
—William Thomson, 1st Baron Kelvin (1824–1907)
Many people imagine Kelvin’s “knowledge” to be unavailable to them because they imagine it to be a consequence of the mathematical articulation of concepts. In fact, it is a condition for that articulation, and as such, it is entirely accessible even to those who are not fluent in the occult technicalities of the language of mathematics.
Mathematicians are, I think, entirely to blame for this misunderstanding.
I advocate an approach to modeling that takes its point of departure in the problems we’re trying to solve: identifying objectives, the uncertainties that influence those objectives, and the interventions that influence those uncertainties or the consequences of their outcomes. These categories must then be carefully delineated so that they are aligned with each other, with the available data, and with the resources we have to analyze that data and execute decisions. This is the hard work of modeling, but it is also immensely insightful and therefore immensely valuable.
This is modeling, but it's not yet mathematics. It is therefore a process that both can and ought to involve everyone with a stake in the model and the problem it's built to address. Decision makers must own the models on which they base their decisions; they do not necessarily need to own the mathematics—the final step of converting the model into equations and solving them.
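A minimal sketch of that decision-first structure follows; the names, distributions, and numbers are hypothetical placeholders, not anything prescribed by Keith. The objective, the uncertainties, and the intervention are all named before any computation happens; the mathematics arrives only at the end, to compare the decision options.

```python
import random

# Decision-first skeleton: objective, uncertainties, intervention, then math.
# All names and numbers below are hypothetical placeholders.
N = 100_000
random.seed(1)

def expected_total_cost(p_incident: float, mitigation_cost: float) -> float:
    """Objective: minimise total cost = mitigation spend + incident losses."""
    total = 0.0
    for _ in range(N):
        loss = 0.0
        if random.random() < p_incident:        # uncertainty: does the incident occur?
            loss = random.lognormvariate(12, 1) # uncertainty: how severe is it?
        total += mitigation_cost + loss
    return total / N

# Intervention: a control that halves the incident probability.
do_nothing = expected_total_cost(p_incident=0.10, mitigation_cost=0)
mitigate = expected_total_cost(p_incident=0.05, mitigation_cost=10_000)

print(f"Expected cost, no action: {do_nothing:,.0f}")
print(f"Expected cost, mitigated: {mitigate:,.0f}")
```

Everything above the final comparison is intelligible without equations, which is precisely the part a decision maker can and should own.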
But all too often we see the opposite of this approach, as a consequence of laziness or an unwillingness to engage with the real world. We see mathematical methods casting about for something to which to apply themselves. We see mathematical complexity for the sake of complexity, unsupported by available data, furnishing adjustments far beneath the noise floor of the fundamental uncertainties in the model and providing no additional insight. We see superfluous ontology: meaningless categories that can’t be defined, much less measured, and that are there purely to get the math to work. We see methods that make unrealistic demands on data or on the resources of analysts. And we see models that inform decisions they can’t influence and fail to inform the decisions they were built to illuminate.
The enemy of good quantitative risk analysis is not qualitative risk analysis but poor quantitative risk analysis: quant for the sake of quant, with no context and no clear idea of the problem the quant is trying to solve.