Singapore Sets Out New Guidelines to Strengthen AI Risk Management in Financial Sector
Key Takeaways
- Sector-Wide Guidance: MAS has released proposed Guidelines on AI Risk Management outlining supervisory expectations for all financial institutions using AI.
- Proportionate Requirements: The Guidelines are designed to scale based on each firm’s size, AI use cases, and risk profile, covering technologies including Generative AI and emerging AI agents.
- Lifecycle Controls: Financial institutions will be expected to apply controls across the AI lifecycle, including data governance, fairness, transparency, explainability, human oversight, and third-party risk.
- Governance Accountability: Boards and senior management are responsible for establishing AI risk management frameworks, policies, and organizational culture to support responsible AI adoption.
- Building on Prior Supervisory Work: The Guidelines draw on MAS’s 2024 thematic review of banks’ AI use and ongoing engagement with industry stakeholders.
Deep Dive
Singapore’s financial regulator is moving to tighten oversight of artificial intelligence across the financial sector, issuing a consultation paper that lays out supervisory expectations for how firms should manage the risks of increasingly powerful AI systems.
The Monetary Authority of Singapore (MAS) on Thursday released proposed Guidelines on AI Risk Management, a sector-wide framework designed to help financial institutions use AI responsibly while maintaining strong governance, lifecycle controls, and operational safeguards. The consultation marks a significant next step in Singapore’s push to balance innovation with sound risk management as AI technologies, particularly Generative AI and emerging autonomous AI agents, become more embedded in financial operations.
According to MAS, the Guidelines will apply to all financial institutions and set out expectations across three main areas: governance and oversight, key risk management systems and processes, and controls across the entire AI lifecycle. While the Guidelines are intended to be broad enough to support a wide range of use cases, MAS emphasizes that implementation should be proportionate, aligned with the size, nature, and risk profile of each firm’s AI usage.
Clear Expectations for Governance and Oversight
MAS underscores that boards and senior management are responsible for driving effective AI governance. That includes implementing frameworks, structures, policies, and processes that anchor AI risk management within the institution, as well as ensuring the right risk culture is in place as AI adoption expands.
Firms will be expected to maintain accurate and up-to-date AI inventories, establish clear identification processes for AI use across the organization, and conduct risk materiality assessments that take into account impact, complexity, and reliance. These mechanisms are intended to provide better visibility into how AI is deployed and where risks are most concentrated.
The proposed framework also calls for robust controls spanning data management, fairness, transparency, explainability, human oversight, evaluation and testing, monitoring, change management, and third-party risk. MAS notes that these controls should be applied based on their relevance and in proportion to the assessed risk level of each AI application. Institutions will also need to ensure they have the appropriate capabilities and capacity to support the scale and sophistication of their AI deployments.
Building on Earlier Supervisory Work
MAS said the Guidelines build on its 2024 supervisory thematic review of banks’ AI use and continued engagement with industry players. The goal is to support responsible innovation while ensuring guardrails are in place.
“The proposed Guidelines on AI Risk Management provide financial institutions with clear supervisory expectations to support them in leveraging AI in their operations,” said Deputy Managing Director Ho Hern Shin. “These proportionate, risk-based guidelines enable responsible innovation by financial institutions that implement the relevant safeguards to address key AI-related risks.”
The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.