The Black Swan Is a Red Herring

Key Takeaways
  • Black Swan Misuse: The article argues that the concept of Black Swan events is often misapplied to excuse failures in preparedness and risk management, a phenomenon described as “Black Swan Washing.”
  • Limits of Traditional Models: It highlights significant weaknesses in conventional risk modeling approaches, including overreliance on Gaussian distributions, deterministic assumptions, and simplistic representations of uncertainty.
  • Narrative Fallacy and Biases: The piece explores how cognitive biases, confirmation fallacy, and retrospective storytelling distort how organizations and individuals interpret risk and unpredictability.
  • Need for Broader Risk Paradigms: Rather than abandoning modeling altogether, the article advocates for more adaptive, uncertainty-aware, and interdisciplinary approaches to risk management and forecasting.
  • Resilience Over Defeatism: The article concludes that Black Swan events should motivate organizations to strengthen resilience, challenge assumptions, and improve forecasting practices instead of embracing fatalism.
Deep Dive

In this article, Graeme Keith explores the enduring influence of Nassim Nicholas Taleb’s Black Swan theory and the growing tendency to use unpredictable events as a catch-all explanation for failures in risk management and preparedness. Examining the limitations of traditional modeling frameworks, the dangers of retrospective narrative-building, and the cognitive biases that shape how organizations interpret uncertainty, Keith argues that the real lesson of Black Swan events is not that forecasting is futile, but that current approaches to modeling risk remain fundamentally inadequate for the complexity of the modern world.

Why the Black Swan Debate Has Become a Dangerous Distraction in Modern Risk Management

According to Nassim Nicholas Taleb, the author of the highly influential book “The Black Swan”, a Black Swan event is characterized as follows:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme ‘impact’. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.
I stop and summarize the triplet: rarity, extreme ‘impact’, and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives.

Black Swans are defined as much by features of our discourse about events as by the events themselves. This is particularly true of the third member of this triplet, which is the source of much of the misuse of the Black Swan concept: retrospective, but not prospective, predictability. Much hinges on whether we believe our inability to model Black Swan events is because they are inherently outside the scope of any modelling or because we are just rather limited in how we think about modelling.

Many authors and pundits take it to mean the former. Black Swans are categorically unpredictable, and our retrospective rationalization is delusional. As such, and given that “almost everything in our world” is explained by “a small number of Black Swans”, they argue we may as well throw in the towel on modelling altogether. This leads to what Dr Jimi M.V. Hinchliffe magnificently calls "Black Swan Washing"—where "risks that should have been predicted and prepared for through prudent risk management and resilience, such as the Covid 19 pandemic, are labelled inherently unpredictable ‘black swan’ events to abrogate the firm, or government, or agency of accountability for its failure to prepare and manage the crystallized risk."

If, on the other hand, modeling naivete is all that prevents the successful anticipation and management of Black Swan events, then the occurrence of Black Swans is a roaring wake-up call to learn from the predictive failures of the past and do everything we can to improve our forecasting of the future.

Black Swan Hegemony

The argument that Black Swans call for a complete capitulation of our modeling aspirations hangs on the belief that Black Swans are exclusively responsible for everything interesting that happens in the world.

Ironically, this belief in the ascendancy of Black Swan events is itself the product of the same narrative fallacy that Taleb persuasively argues is responsible for deluding us into believing we can explain past Black Swans, and thereby model them in the future.

Taleb describes how we look back at events and construct inevitable narratives of causal contingency by which such events were bound to transpire. As Taleb writes:

…narrativity causes us to see past events as more predictable, more expected, and less random than they actually were

But this is exactly what we are doing when we look back at the complex unfolding of history, with its dense, tangled webs of influence, and ascribe endogenous bifurcations in its macroscopic trends to the exclusive agency of a handful of exogenous epoch-making Black Swan events. In the same way that we post-rationalize narratives that entail the inevitability of Black Swan events, we post-rationalize narratives that credit them with unique causal influence. We are at least as deluded about the significance of Black Swan events as we are about our ability (or otherwise) to explain and predict them.

The Poverty of Our Modelling Paradigms

To claim as illusory the belief that Black Swan events explain almost everything is in no way to claim they aren’t important, nor does it defend our persistent inability to model them. I argue, however, that it does bring within reach the aspiration of using models to leverage our intelligence and experience to anticipate, understand and manage the critical risks we face. And it intensifies the urgency with which we must address the shortcomings of our current modelling paradigms. This is the most valuable legacy of the Black Swan concept.

Taleb himself is articulate and exhaustive in enumerating the inadequacies of our current modelling practice. They fall broadly into two categories:

Model Inadequacies
  • The inadequacy of Gaussian models: models based on Gaussian distributions (including Brownian motion models such as Black-Scholes), mean/variance models, and variance as a solo risk metric.
  • The inadequacy of archetypical models of uncertainty (coins, dice, card games) to capture the characteristics of real-life uncertainty (the ludic fallacy).
  • The dominance of deterministic models, and the blind faith with which we believe in them, to the point of not even checking whether they work.
  • The failure to adequately account for structural features of the dynamics of complex systems, such as bifurcations to instability and chaotic behaviour.
  • The failure to situate models in their broader context and to understand how wider trends relate to the assumptions in our simpler models.
Epistemological Inadequacies and Biases
  • We carefully select what we do model by what we can model, and we generalize from that to what we can’t.
  • We confirm our models with data they were built to explain and ignore everything that speaks against them (the confirmation fallacy).
  • We find spurious patterns in randomness.
  • We attribute causality to contingency (the narrative fallacy).
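
The first bullet under model inadequacies can be made concrete with a quick back-of-the-envelope comparison. The sketch below is illustrative only (it is not from the article, and the power-law exponent is an arbitrary choice): it contrasts how a Gaussian tail and a Pareto-style heavy tail rate the probability of a "six-sigma" event.

```python
import math

def gaussian_tail(k: float) -> float:
    """P(X > k) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k: float, alpha: float = 3.0) -> float:
    """P(X > k) for a power-law tail with exponent alpha (illustrative choice)."""
    return k ** (-alpha)

k = 6.0  # a "six-sigma" move
print(f"Gaussian:  {gaussian_tail(k):.2e}")  # on the order of 1e-9: essentially never
print(f"Power law: {pareto_tail(k):.2e}")    # on the order of 5e-3: a few per thousand
print(f"Ratio:     {pareto_tail(k) / gaussian_tail(k):.0f}x")
```

The point is not the particular numbers but the gap between them: a model that assumes thin Gaussian tails will price an extreme move as millions of times less likely than a heavy-tailed model of the same data, which is precisely why variance alone is a poor solo risk metric.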

If we are to achieve our modelling aspirations, we must learn to explicate what our models can and cannot reasonably tell us. We must examine and test our modelling assumptions, recognize their limitations, and appreciate the trade-off we make between faithful representation and computability. And we must test the predictions of our models against outcomes.

We must build uncertainty into our models and make uncertainty part of the process by which we use models to make decisions. In that process, we must align our categories of response, intervention and control with what we can reasonably know and what we cannot.
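
As a minimal sketch of that idea (the numbers and spreads below are my own illustrative assumptions, not from the article): a deterministic model reports a single point estimate, while an uncertainty-aware version of the same model samples plausible inputs and reports the probability of the outcome we actually care about, such as a loss.

```python
import random
import statistics

def project_value(cost: float, revenue: float) -> float:
    """Toy valuation model: value is simply revenue minus cost."""
    return revenue - cost

# Deterministic view: single point estimates, single answer.
point_estimate = project_value(cost=100.0, revenue=120.0)  # +20, looks safely positive

# Uncertainty-aware view: sample the inputs (assumed spreads) and look at
# the distribution of outcomes, not just its centre.
random.seed(42)
samples = [
    project_value(cost=random.gauss(100.0, 20.0), revenue=random.gauss(120.0, 40.0))
    for _ in range(100_000)
]
p_loss = sum(v < 0 for v in samples) / len(samples)

print(f"Point estimate: {point_estimate:+.1f}")
print(f"Mean outcome:   {statistics.mean(samples):+.1f}")
print(f"P(loss):        {p_loss:.0%}")  # roughly one chance in three, despite the positive point estimate
```

The deterministic answer and the mean outcome agree, but only the probabilistic view reveals that the downside is far from negligible, and it is that kind of output, not a point forecast, that decision categories of response, intervention and control can be aligned with.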

And where the underlying dynamics of the systems we interrogate entirely preclude prediction, we must broaden our notion of modeling: Learn to utilize the qualitative and relational insights our models do give us, look to our vulnerabilities and broaden the categories of both the threats we consider and our potential responses to them.

Finally, in our humility about what we can learn from our models, we must connect with other disciplines to bring broader perspectives of understanding to bear and understand how they align with our modeling insights.

Conclusion

These are the discussions we should be having. Instead, we seem endlessly stuck discussing whether such and such an event was or was not a Black Swan and whether the existence of Black Swan events precludes our ability to model them, as if they weren’t defined by our ability to model them in the first place.

It is in this sense that I claim the Black Swan is a red herring. The book, the concept, and the ensuing debate have been enormously valuable in broadening public awareness of the mathematical and epistemological shortcomings in our modeling and the consequent need for new, broader paradigms to meet these challenges. But Black Swans are neither as omnipotent nor as inscrutable as they are often portrayed, and the concept has become a distraction. It deludes us into defeatism instead of animating us to do better.

The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.
