Imagine an AI-Enabled World of Risk Management
Key Takeaways
- Risk language evolves: Traditional terms like “risk” and frameworks such as COSO ERM and ISO 31000 are replaced by plain-language concepts like “what might happen” and “how will we achieve our objective.”
- AI as decision partner: Decision-makers rely on AI agents to analyze probabilities, assess potential impacts, and recommend optimal courses of action in real time.
- Cross-functional AI integration: AIs coordinate across departments, factoring in supply chain timelines, quality control, cybersecurity readiness, and customer satisfaction metrics.
- Risk and audit roles adapt: Practitioners shift to overseeing AI systems, ensuring data integrity, monitoring for AI “hallucinations,” and providing assurance on both risks and opportunities.
- Governance focus: Internal audit leaders engage directly with CEOs on AI governance, change management, and workforce impacts, maintaining a seat at the table for strategic discussions.
Deep Dive
In the latest piece from Norman Marks, the veteran governance, risk, and audit thought leader takes a bold leap into the near future, imagining how AI could fundamentally reshape decision-making, risk management, and the role of internal audit. Through a vivid crystal-ball scenario, Marks explores what happens when AI becomes a trusted partner for executives, operations, and assurance functions alike.
A Glimpse Into an AI-Driven Risk Landscape
I am going to look into my AI-enabled crystal ball and imagine the world of the future (the not-too-distant future) decision-maker. Then I will look again to see what the risk practitioner and the internal auditor are doing in this new world.
The first thing I notice is that the word “risk” has been removed from the business vocabulary. COSO ERM and ISO 31000 are extinct.
Instead, we are in a world where people talk about “what might happen” and “how will we achieve our objective.”
When a decision is needed, our intrepid decision-maker, Alex, asks his faithful AI, George, a series of questions.
Alex: “George, when do we need to purchase new supplies?”
George: “You have sufficient supplies for three months’ usage based on historical averages and the forecast I prepared for you last week. Do you want me to update that forecast?”
Alex: “Yes.”
George: “Your forecast usage has not changed.”
Alex: “OK. So when do we need to purchase new supplies, given lead times?”
George: “If you place an order for the normal quantity today, there is an 85% probability that you will receive the supplies a week before they are needed in manufacturing, a 10% likelihood that they will arrive within a day of needing them, and a 5% likelihood that they will be late. Here is more information: There is a 3.5% likelihood that they will be more than one day late. This is all assuming that you use the same supplier as last time.”
Alex: “What will happen if they arrive one day before they are needed?”
George: “There will be insufficient time for a careful inspection of their quality. There is a 10% likelihood, based on past inspections, that imperfect materials will move into production. In that case, the likelihood that the defects will be detected when the finished goods are inspected is 90%, leaving a 10% likelihood that defective products will be sent to customers. The resulting impact on customer satisfaction and the loss of revenue is outside the acceptable parameters you have established.”
Alex: “What can and should we do?”
George: “I have examined the options, and the best is to spread your orders over the next week. I have exchanged information with the supplier’s AI, and they can schedule their production of the materials so there is a 98% chance of receiving what you will need for each day’s production at least five working days in advance. That will drop the potential effects on customer satisfaction and revenue to desired levels.”
Alex: “George, place those orders and keep me updated.”
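To make the arithmetic behind George's answers concrete, here is a minimal sketch, added here as an illustration rather than part of the original scenario. It simply chains the likelihoods quoted in the exchange above, assuming the events are independent and treating the 10% "arrive within a day" case as the tight-arrival situation George describes.

```python
# Illustrative arithmetic only: chaining the likelihoods George quotes above,
# assuming the events are independent (an assumption, not stated in the scenario).

p_tight_arrival = 0.10               # materials arrive within a day of being needed
p_imperfect_into_production = 0.10   # no time for a careful quality inspection
p_missed_at_final_inspection = 0.10  # 90% detection rate, so 10% slip through

# Probability that defective products reach customers, given a tight arrival
p_defect_given_tight = p_imperfect_into_production * p_missed_at_final_inspection

# Unconditional probability, combining with the arrival forecast
p_defect_overall = p_tight_arrival * p_defect_given_tight

print(f"P(defects reach customers | tight arrival) = {p_defect_given_tight:.1%}")  # 1.0%
print(f"P(defects reach customers, overall)        = {p_defect_overall:.2%}")      # 0.10%
```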
Meanwhile, Tom, the COO, is talking to George as well. AIs, apparently, can multitask.
Tom: “George, are we ready to announce the availability of our new AI-enabled clocks?”
George: “Tom, thank you for asking. Give me a second to check the status of the various teams. Okay. Marketing is reporting that they are ready, but Manufacturing says they are 90% ready and would prefer another week. If we announce today, there is a 10% chance of a quality defect not being detected until final QA when the product is ready for shipment, and a 5% chance that it will not be detected before shipment. That is outside the acceptable parameters you have established. The Technology team needs another ten days before they can meet established parameters for cyber defences. If we announce today, there is a 5% chance that the product will ship with significant vulnerabilities. If they are exploited after sale, the reputational and revenue damage will be outside acceptable parameters.”
Tom: “Got it. What are the options, George, and how do they compare? Which is best and why?”
We can leave Tom to work with George on the decision.
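The "acceptable parameters" George keeps citing behave like risk-appetite thresholds. The sketch below is another illustration, not something from Marks's scenario: the option names, probabilities, and thresholds are made up, but it shows one simple way an agent could screen options against such thresholds and recommend the earliest acceptable announcement date.

```python
# A hypothetical sketch of the comparison George might run for Tom: screen each
# option against the "acceptable parameters" (thresholds) the business has set.
# Option names, probabilities, and thresholds are illustrative, not from the scenario.

THRESHOLDS = {"undetected_defect": 0.02, "shipped_vulnerability": 0.02}

options = {
    "announce today":       {"undetected_defect": 0.05, "shipped_vulnerability": 0.05, "delay_days": 0},
    "announce in one week": {"undetected_defect": 0.02, "shipped_vulnerability": 0.03, "delay_days": 7},
    "announce in ten days": {"undetected_defect": 0.01, "shipped_vulnerability": 0.01, "delay_days": 10},
}

def within_tolerance(option: dict) -> bool:
    """True if every monitored likelihood is at or below its threshold."""
    return all(option[key] <= limit for key, limit in THRESHOLDS.items())

acceptable = {name: opt for name, opt in options.items() if within_tolerance(opt)}

# Among the acceptable options, prefer the earliest announcement date.
best = min(acceptable, key=lambda name: acceptable[name]["delay_days"])
print("Within tolerance:", list(acceptable))   # ['announce in ten days']
print("Recommended:", best)                    # announce in ten days
```

A real agent would presumably weigh expected impact and upside as well, not just threshold compliance, but the screening step is the same idea.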
Where are the risk and audit practitioners? Let’s have a look.
Ah, there is Michael, who used to have the title of Chief Risk Officer. My view doesn’t let me see what his title is now. He is watching multiple screens. A few show his own AI monitoring the quality of operations of the various AI agents (especially George) used by the business, and checking the integrity (completeness, accuracy, validity, and security) of the data they use. He has established alerts for potential AI hallucinations (where an AI makes up data when it can’t find what is needed). He also has an AI monitoring internal and external sources of information; it connects with other AI agents around the world and notifies him of changes, in the business or in the world in which it operates, that need to be shared with and acted upon by senior management and their AIs.
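What might Michael's monitoring look like in practice? Below is a rough, rule-based sketch, again an illustration with made-up field names, tolerances, and example values: one function for basic data-integrity checks, and another that flags a figure quoted by an agent when it cannot be reconciled with the underlying data.

```python
# A hypothetical, rule-based sketch of the kind of checks a monitoring AI might run:
# basic data-integrity tests, plus a crude "hallucination" flag when a figure quoted
# by an agent cannot be reconciled with the source data.
# Field names, tolerances, and example values are illustrative, not from the scenario.

def integrity_issues(record: dict, required=("supplier", "quantity", "due_date")) -> list:
    """Return completeness/validity problems found in a single source record."""
    issues = [f"missing field: {field}" for field in required if record.get(field) is None]
    quantity = record.get("quantity")
    if isinstance(quantity, (int, float)) and quantity <= 0:
        issues.append("invalid quantity: must be positive")
    return issues

def looks_like_hallucination(quoted_value: float, source_values: list, tolerance: float = 0.05) -> bool:
    """Flag a figure an agent quoted if it is not within tolerance of any source value."""
    return not any(abs(quoted_value - v) <= tolerance * abs(v) for v in source_values)

# Example run with made-up data
order = {"supplier": "Acme", "quantity": 0, "due_date": None}
print(integrity_issues(order))                            # ['missing field: due_date', 'invalid quantity: must be positive']
print(looks_like_hallucination(1200.0, [980.0, 1015.0]))  # True: no source value supports 1200
```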
Now I can see Maria, the company’s Senior Vice President and Chief Audit Executive. She is talking to the CEO about the continuing AI implementation program and, in particular, the related change management initiative. She is asking the CEO about the company’s plans for the displaced employees. In front of her is her report to the CEO on the AI project’s risk status. It’s clearly a collaborative and constructive conversation. Now they have switched to the success of the AI governance program.
It’s good to see the CAE being treated as a valued and trusted advisor by the CEO. She plays a major role in providing assurance to the board and top management about the risks and opportunities that matter to them. She and the CEO recognize that changes are happening within and outside the business that need the attention of the CAE and her team.
But wait! The AI in the crystal ball is telling me that while this is the most likely scenario, it needs to show me others and their likelihood.
Tell me what you think of this vision while I check out other scenarios. There is still time to make changes that will affect the future world we will live in.