How to Model Risk

By Graeme Keith

Key Takeaways

  • Models Serve Decisions: The purpose of modeling is usefulness in decision-making, not perfect accuracy.
  • Framing Comes First: Clear objectives, stakeholders, scope, and context must be defined before building the model.
  • Causal Alignment Matters: Effective models link objectives, uncertainties, and decisions through causal reasoning, not just data correlations.
  • Collaboration Drives Adoption: Engaging decision-makers and subject matter experts ensures ownership and practical use of the model.
  • Math Comes Last: Quantification should follow after causal structure and alignment, not lead the modeling process.
Deep Dive

In this article, Graeme Keith explores what it really means to build a risk model that is genuinely useful in practice rather than simply mathematically impressive. He emphasizes that effective models must be embedded in real decision-making processes, aligned with clear objectives, and developed collaboratively with stakeholders. The focus is on modeling as a creative, iterative, and context-driven exercise that prioritizes understanding causal relationships and supporting informed action.

Building Models that Support Real Decisions
All models are wrong, some are useful.

Fine, but we do not build models to be right. We build models to be useful. Sure, fidelity is an important part of being useful, but primarily, to be useful, models need to be used. They need to influence decisions taken in the pursuit of goals. This is why the most important—and sadly often the most neglected—aspect of modeling is ensuring the model is embedded in a decision process and why good modeling is more about aligning objectives, uncertainties and decisions than about maximizing empirical fidelity.

Modeling is a creative endeavor, so modeling workflows are necessarily highly nonlinear, agile and iterative. But modeling ought also to be highly collaborative, so this must take place in a clearly structured framework, like the one illustrated above.

Framing is critical. Why are we building this model? What decisions is it going to support? Who cares about it? Who is taking the decisions the model is going to support? What is their background for leveraging the insight the model brings? What's in and what's out of the model? How often is the model to be run and by whom? What is their background? Who is providing data for the model? What is their background? How much time do they have to do that? Who is building the model? What resources do they have? When does it need to be ready?

All of these considerations influence the way we build the model, and they must be aligned with the model and with each other. There is no point in building a model if decision-makers won't engage with it, or if we can't provide the data for it. And we need to make sure the model provides the decision-maker with the information she needs and does not burden her with information she doesn't need.
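
To make the framing questions concrete, here is a minimal sketch of how they might be captured as a simple checklist in code, assuming a Python workflow. The field names and example entries are hypothetical illustrations, not a prescribed schema; the point is only that framing answers are worth writing down and revisiting.

```python
# Hypothetical framing "brief" captured as a data structure (a sketch, not a standard).
from dataclasses import dataclass

@dataclass
class FramingBrief:
    purpose: str                    # why are we building this model?
    decisions_supported: list[str]  # which decisions will it inform?
    decision_makers: list[str]      # who takes those decisions?
    stakeholders: list[str]         # who else cares about the output?
    in_scope: list[str]             # what is inside the model boundary?
    out_of_scope: list[str]         # what is explicitly excluded?
    run_cadence: str                # how often is the model run, and by whom?
    data_providers: list[str]       # who supplies the inputs?
    deadline: str                   # when does it need to be ready?

# Hypothetical example values for illustration only.
brief = FramingBrief(
    purpose="Support a go/no-go decision on entering a new market",
    decisions_supported=["Go/no-go", "Entry timing"],
    decision_makers=["Portfolio committee"],
    stakeholders=["CFO", "Regional sales lead"],
    in_scope=["Demand uncertainty", "Entry cost"],
    out_of_scope=["Currency hedging strategy"],
    run_cadence="Quarterly, by the planning analyst",
    data_providers=["Market research team"],
    deadline="Before the Q3 portfolio review",
)
print(brief.decisions_supported)
```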

Context and Assumptions

Context is both an important component of framing and an essential component of transitioning out of framing and into building a model. There are two apparently opposing processes in play here: on the one hand, articulating what is new, different, changing, or emerging (trends, opportunities, and threats); on the other, surfacing the established, tacit, often hidden assumptions that underlie our received wisdom about the scope of the model. This is as much about aligning stakeholders and their beliefs as about enumerating the assumptions upon which the model is grounded and constrained.

Assumptions are critical to understanding a model's scope of applicability, but they are also a powerful tool in their own right for identifying emerging risks. We make assumptions about the world, implicit and explicit, in order to speed up our processing so we can act faster. The weakening of those assumptions is the first sure sign of cracks in the bedrock of our decision-making paradigms: warnings that the future is no longer going to resemble the past in which our established habits have secured us success.
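
One lightweight way to put this into practice is an assumptions register that records each assumption, whether it was explicit or tacit, and the signal that would indicate it is weakening. The sketch below, in Python with entirely hypothetical entries, is one possible shape for such a register, not a prescribed format.

```python
# Hypothetical assumptions register: each entry pairs an assumption with a
# "weakening signal" to watch for, and flags whether it started out tacit.
assumptions = [
    {
        "assumption": "Historical demand volatility is representative of next year",
        "explicit": True,
        "weakening_signal": "Two consecutive quarters outside the historical range",
        "owner": "Planning analyst",
    },
    {
        "assumption": "Key supplier lead times stay within contracted bounds",
        "explicit": False,  # tacit until surfaced in the framing workshop
        "weakening_signal": "Repeated expedite requests from operations",
        "owner": "Supply chain lead",
    },
]

def assumptions_to_review(register):
    """Return tacit assumptions first, so hidden beliefs get surfaced and challenged."""
    return sorted(register, key=lambda a: a["explicit"])

for a in assumptions_to_review(assumptions):
    print(f"{'tacit' if not a['explicit'] else 'explicit'}: {a['assumption']}")
```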

The Modeling Triad

Models are built between decisions and objectives and are connected to the world through data. This is a three-way dialogue (which is a big difference between modeling and data analytics, which is just a monologue from the data corner).

We often start with objectives, the stochastic variables in whose outcomes we are invested (literally and metaphorically), and work backwards. We identify the variables that influence our objectives and the variables that, in turn, influence them. Eventually this process arrives either at uncertainties, over which we have no control, or at decisions, which are exactly the things we can control.

We can also work the other way: start with decisions and interventions and work forwards through the things they influence to the objectives whose successful outcomes they are trying to obtain.

This process establishes causal connections between our objectives, the uncertainties that sway their outcome and the decisions with which we try to intervene to improve those outcomes. Causal models help us to understand how our choices propagate through causal chains to affect our outcomes. (This is another big difference from data analytics, which tends to be correlative and relational rather than causal.)
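
As an illustration of this structure, the sketch below encodes a toy influence diagram in Python: nodes are tagged as decisions, uncertainties, intermediates, or objectives, and a small helper walks backwards from an objective to the decisions and uncertainties that drive it. All node names are hypothetical; the structure, not the content, is the point.

```python
# Toy causal structure: every path runs from a decision or an uncertainty
# through intermediate variables to an objective. Node names are hypothetical.
nodes = {
    # name: (kind, parents)
    "marketing_spend":  ("decision",     []),
    "launch_timing":    ("decision",     []),
    "competitor_entry": ("uncertainty",  []),
    "unit_margin":      ("uncertainty",  []),
    "market_share":     ("intermediate", ["marketing_spend", "launch_timing", "competitor_entry"]),
    "annual_profit":    ("objective",    ["market_share", "unit_margin"]),
}

def upstream(node, graph):
    """Work backwards from a variable to the decisions and uncertainties that drive it."""
    _, parents = graph[node]
    found = set()
    for p in parents:
        p_kind, _ = graph[p]
        if p_kind in ("decision", "uncertainty"):
            found.add(p)
        found |= upstream(p, graph)
    return found

print(upstream("annual_profit", nodes))
# -> the decisions we control and the uncertainties we don't, for this objective
```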

Connecting decisions to uncertainties and objectives, and then inviting data into the dialogue, establishes an appropriate level of granularity in the model. Data are chosen for their relevance to decisions and objectives, and the amount of data is matched to the level of granularity that the alignment between decisions and objectives dictates.

Building a model in this way ensures the model is actionable. Decisions are built in and the level of granularity is chosen to support the decision. It also makes sure that the data you bring into your model are the data you need to address the decisions you're faced with to achieve the goals you want to achieve. (Again, another difference from data analytics, which tends to force you to choose the questions you ask on the basis of the data you have available.) And it ensures that models are as simple as they can be, but not simpler, because "not simpler" actually has a clear meaning when you have a decision to look through and an objective to achieve.

Making Models Matter

This process is agile and iterative, both at the level of model building and in continuously checking alignment with framing and context and monitoring the assumptions, explicit and implicit, that we make in building the model. Many modeling workflows will have regular check-ins with decision-makers and stakeholders to revisit assumptions and validate the inevitable refinement of context and assumptions.

This process is also collaborative. Apart from ensuring actionable, fit-for-purpose, pragmatic models, it has two additional, overwhelming advantages with respect to stakeholder engagement. First, by engaging subject matter experts and decision-makers through the language of cause, information, and influence, we capture the essential information we need to build the model without burdening our stakeholder relationships with talk of probability or mathematics. Human beings have an intuitive grasp of causation that they simply do not have for uncertainty. Second, if a stakeholder owns the causal model and understands where the data to condition that model will come from, that is usually more than enough to give her the sense of ownership she needs to feel comfortable using the model to support the decisions she has to make.

To this end, this framework is specifically designed to import Decision Quality principles: understand objectives; explicate options and alternatives for the decision levers included in the model; explain and articulate the influence of those decisions on goals; and so on. These, too, are often built into modeling workflows.

Oh! And Do Some Math

The single biggest blunder I see when I'm asked to mediate between models and the stakeholders who have commissioned them is that attention has been focused almost exclusively on mathematical detail: a maniacal pursuit of making the model as faithful and as detailed as possible, regardless of the need for or relevance of that detail, and without any respect for the organizational or environmental context in which the model should be embedded.

I firmly believe that at least as far as the broader stakeholder community is concerned, this is the final step (and often left as an appendix), but as with the whole modeling process, this too is iterative. Naturally, it is important to consider how we will capture and condition the variables we identify in the foregoing and ongoing discussions, and—full disclosure—I will often start with some back-of-an-envelope brainstorming so as not to get caught with a magnificent causal framework that I can in no way meaningfully quantify.
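
By way of illustration, the back-of-an-envelope quantification might look something like the crude Monte Carlo sketch below, in Python: two hypothetical alternatives for a decision lever are propagated through placeholder uncertainty distributions to a profit objective. None of the distributions or parameters are calibrated; they only show how a causal structure can be quantified once it is agreed.

```python
# Back-of-the-envelope Monte Carlo sketch. All distributions, parameters, and
# the two decision alternatives are hypothetical placeholders.
import random

def simulate_profit(marketing_spend, n=10_000, seed=1):
    rng = random.Random(seed)
    profits = []
    for _ in range(n):
        # Uncertainties: baseline share and unit margin, drawn from crude placeholder distributions
        base_share = rng.uniform(0.05, 0.15)
        unit_margin = rng.gauss(mu=12.0, sigma=3.0)
        # Causal link: spend lifts share with diminishing returns (illustrative form only)
        share = base_share * (1.0 + 0.5 * (marketing_spend / (marketing_spend + 1.0)))
        volume = 1_000_000 * share
        profits.append(volume * unit_margin - marketing_spend * 1_000_000)
    return profits

for spend in (0.5, 2.0):  # two alternatives for the decision lever, in $m
    outcomes = simulate_profit(spend)
    mean = sum(outcomes) / len(outcomes)
    p_loss = sum(p < 0 for p in outcomes) / len(outcomes)
    print(f"spend={spend}m: mean profit ~ {mean:,.0f}, P(loss) ~ {p_loss:.2%}")
```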

Conclusion

As I discussed in my last article, Why Model?, the point of a risk model is to systematize your understanding of the uncertainties influencing your objectives in a way that allows you to leverage data to make better, more informed decisions. The way we build models, aligning decisions, uncertainties, and objectives together with the data we need to leverage, is designed exactly to achieve this. But this, too, must in turn be aligned with the practicalities of making sure the model is actually used, which is a model's ultimate measure.

