Why Model Risk?

Key Takeaways
  • Modeling Over Models: The true value lies not in the model itself but in the act of modeling, which clarifies assumptions, objectives, and decision levers.
  • Clarity Through Mathematics: Mathematization forces logical consistency and precision, making models a tool for better thinking rather than just number-crunching.
  • Quantification Enables Accountability: Assigning numbers to outcomes allows for comparison, preference, and empirical testing, making decision-making more disciplined.
  • Usefulness Over Accuracy: As George Box’s quote suggests, the goal isn’t to be “right” but to be useful; models help structure uncertainty for better decisions.
  • Modeling as a Thinking Process: You don’t need deep mathematical expertise to model risk effectively, only the willingness to think clearly and systematically.
Deep Dive

In this article, Graeme Keith explores the deeper purpose of risk modeling—not as a mathematical exercise in prediction, but as a disciplined way of thinking. Drawing parallels from military planning to decision science, Keith examines why the act of modeling itself often yields greater value than the models it produces. Through reflections on clarity, logic, and the pursuit of usefulness over perfection, he argues that modeling is as much about understanding uncertainty as it is about managing it.

The Value of Modeling Risk Lies in the Thinking, Not the Math

"Models are nothing. Modeling is everything."

I stole this quote from Sam Savage, who stole it from Eisenhower’s 1957 “Plans are worthless, but planning is everything.” Eisenhower was telling an anecdote about how, at the start of the century, maps of the Alsace-Lorraine area of Europe were used for U.S. military training but were replaced after WWI because the location was not deemed relevant to American forces. (The irony, of course, being that this was exactly where American soldiers were deployed in WWII.) Eisenhower’s point was that although the specifics of that training were indeed rendered worthless by dumb fate, the training itself—the underlying principles and methods—was none the worse for being worked out at a more “relevant” location in the U.S.

Eisenhower was talking about military planning. Quoting Churchill, “In battles ... the other fellow interferes all the time and keeps upsetting things...”

Churchill goes on to say, “...the best generals are those who arrive at the results of planning, without being tied to plans.”

Modeling risk is not so different. Much risk modeling takes place in competitive, if not actually combative, environments. The other fellow interferes all the time. And if it’s not the other fellow, then our ignorance of the true state of the world in a rapidly changing environmental context is usually enough to upset things all on its own.

So why model? Because like Churchill’s generals and Eisenhower’s field officers, we recognize that there is enormous value in the understanding and insight we derive from doing the hard work of modeling—articulating what we’re trying to achieve and what influences that success, and enumerating the decision levers we can pull to intervene in what influences success. And every now and then, when the future we’re trying to map out sufficiently resembles the past from which we are trying to triangulate—statistically at least—then models aren’t completely worthless. At least while we’re not at war.

Mathematical modeling draws its power from two sources: quantification—the assignment of numbers to the different parts of the model—and what I’ll call mathematization—the construction of mathematical relationships between different variables in the model.

Quantification allows you to make comparisons and determine preferences. Outcome A has a worse impact than Outcome B in terms of lives lost, impact on the economy, or whatever; therefore, we prefer option B to option A. In fact, it turns out that any method at all that determines a preference—at least a consistent set of preferences—implies a quantitative model. Which is another reason to build quantitative risk models: if we’re making decisions, we’re effectively making models—far better to do so intentionally.
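To make the point concrete, here is a minimal sketch of what "a consistent set of preferences implies a quantitative model" looks like in practice. The impact figures and the expected-impact criterion are illustrative assumptions, not from the article:

```python
# Hypothetical sketch: once outcomes are quantified, preference becomes comparison.
# The probabilities and impact numbers below are made up for illustration.

def expected_impact(outcomes):
    """Probability-weighted impact (e.g., lives lost or economic cost) of an option."""
    return sum(p * impact for p, impact in outcomes)

# Each option is a list of (probability, impact) pairs over its possible outcomes.
option_a = [(0.7, 10.0), (0.3, 100.0)]   # usually mild, occasionally severe
option_b = [(0.9, 25.0), (0.1, 50.0)]    # steadier, moderate impact

# Quantification turns "which do we prefer?" into an explicit, inspectable rule.
preferred = "A" if expected_impact(option_a) < expected_impact(option_b) else "B"
```

The point is not the arithmetic but that the preference rule is now explicit: anyone can inspect the numbers and the criterion, and disagree with either.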

Quantification also allows you to hold your model up to scientific scrutiny by comparing outcomes predicted by the model to observations and measurements. Without quantification, we cannot answer even very simple questions like “How far off was that prediction?” and “Is this prediction better than that one?”—let alone the far harder but incredibly important question, “Is this model good enough to examine the difference in outcome between these two options?”
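As a sketch of how "Is this prediction better than that?" becomes answerable once things are quantified, one common choice is the Brier score for probabilistic forecasts. The forecasts and outcomes below are invented for illustration:

```python
# Hypothetical sketch: scoring probabilistic predictions against observations.
# Forecast probabilities and observed outcomes are illustrative, not real data.

def brier_score(forecasts, outcomes):
    """Mean squared gap between predicted probabilities and what happened (0 or 1).
    Lower is better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

model_1 = [0.9, 0.2, 0.8, 0.1]   # predicted probabilities of four events
model_2 = [0.6, 0.5, 0.5, 0.4]
observed = [1, 0, 1, 0]          # what actually occurred

# "Is this prediction better than that?" reduces to comparing two numbers.
better = ("model_1" if brier_score(model_1, observed) < brier_score(model_2, observed)
          else "model_2")
```

Without quantification there is nothing to score; with it, model comparison becomes a routine, repeatable measurement.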

So quantification allows comparisons and tougher scrutiny of a model’s relationship to reality. But the real power of a mathematical model is not the model—it’s the modeling, what I call mathematization: the fact that to make relationships quantitative, the relationships within a model need to be mathematical. And to make a good model, those relationships need to be aligned in scope and granularity: with the decisions and objectives we’re trying to master, with each other, and with the data we will use to condition the model to tell us about the future.

Mathematics is a demanding taskmaster; it doesn’t put up with much nonsense. Mathematizing a model demands a clarity and logical consistency that—in my experience—is enormously valuable in itself, even before you get to the benefits of the quantification that follows.

The rigor might appear to be a restriction—doesn’t that just mean I can’t build a mathematical model unless I really, really understand the math and all the demands it makes? Not at all. In practice, it’s the other way around. Understanding the mathematics is not a condition for clarity. Clarity is a condition for understanding the mathematics. The process of building a model itself clarifies the relations and forces us to make explicit what we otherwise only implicitly assume. It makes absolutely clear what we’re trying to achieve and which levers we can pull to try to achieve it.

Perhaps surprisingly, none of these benefits requires a very deep knowledge of mathematics. All we’re doing here is rigorous refinement of the basic logic of building a model. To draw on an analogy from last week’s article, it’s just thinking. Slow thinking. Thinking on steroids. But it’s just thinking. This kind of logic can be learned by anyone because it’s the kind of thinking we want to be doing all the time.

“All models are wrong. Some are useful.”

I always felt George Box’s famous quote, while not wrong, is misleading in that it gives the impression that the point of a model is to be right and that it’s more or less an accident if it turns out to be useful. The point of a model is not to be right—it’s to be useful.

Specifically, the point of a risk model is to systematize your understanding of the uncertainties influencing your objectives in a way that allows you to leverage data to make better, more informed decisions. Sometimes, in stable contexts that don’t change over the timescales over which you deploy the model and in which the past resembles—statistically at least—the future, this is down to the model. Always, though, it’s down to the modeling.

