Sweden Positions Its Financial Watchdog at the Center of AI Supervision
Key Takeaways
- FI Endorses Expanded AI Oversight: Finansinspektionen supports being designated as market surveillance authority for high-risk AI systems in the financial sector under Sweden’s adaptation of the EU AI Act.
- AI Supervision Embedded in Financial Regulation: Oversight of certain Annex III high-risk AI systems would sit within FI’s existing supervisory perimeter, reducing fragmentation and aligning AI compliance with financial rules.
- Innovation Role Confirmed: FI backs shared responsibility for AI regulatory sandboxes and expanded authority over real-world testing of high-risk AI systems, while maintaining that the government should set fee regulations.
- Enforcement Tools Supported With Clarification Sought: FI agrees national authorities should be able to issue remarks, fines, and injunctions, but requests clarity on how new AI enforcement powers and secrecy provisions interact with existing financial supervisory law.
Deep Dive
Sweden’s financial watchdog is preparing for a larger role in the age of AI. In its formal response to the government inquiry Adaptations to the AI Regulation, Finansinspektionen (FI) showed clear support for taking on responsibility as the market surveillance authority for the financial sector under the EU AI Act. It also backed proposals that would give it a defined role in innovation-promoting measures, including AI regulatory sandboxes.
But while the authority’s tone is broadly supportive, its message to lawmakers is equally clear. If AI oversight is to sit within financial supervision, the legal wiring needs to be clean, coherent, and free of unintended overlap.
At the heart of the proposal is a relatively straightforward idea. Rather than building an entirely new supervisory layer for AI in finance, Sweden would embed oversight within its existing financial regulatory structure. And the FI welcomed that approach.
The inquiry proposes that FI be given sector-specific responsibility for market surveillance of certain high-risk AI systems referenced in Annex III, points 5(b) and 5(c), of the EU AI Act, where those systems fall within FI's supervisory perimeter. For firms already under financial supervision, FI argues, this could simplify compliance. The AI Act and financial services laws overlap in places, and having one authority oversee both may reduce fragmentation and duplicated processes.
FI also noted that similar mandates are expected to be given to financial supervisory authorities across the EU, improving the prospects for harmonized market surveillance of the Union's financial sector.
That said, the authority acknowledged that it would be assuming a comparatively large responsibility and supported the inquiry’s assessment that additional funding will be required to meet the expanded mandate.
Sandboxes, Real-World Testing, and Innovation
The response is not just about enforcement.
FI also endorsed the proposed division of responsibility around AI regulatory sandboxes, welcoming a cooperative model in which sector authorities share responsibility rather than centralizing everything in a single body. In its view, that shared approach is likely to make innovation support more effective.
The authority further supported the proposal that it be given expanded responsibility for supervising the testing of high-risk AI systems under real-world conditions, along with the ability to issue additional regulations governing such testing. Given the technical and domain-specific complexity of financial AI use cases, FI described that arrangement as appropriate.
However, it drew a line on one issue. While it would administer testing oversight, it argued that the government, not FI, should remain responsible for setting fee regulations, consistent with the existing ordinance governing fees for matters handled by the authority.
Secrecy Rules
One of FI’s most pointed interventions concerns confidentiality. Under the proposal, certain information would be protected by secrecy pursuant to Chapter 30, Section 23 of Sweden’s Public Access to Information and Secrecy Act (OSL), supplemented by a new provision in Section 9 of the Public Access to Information and Secrecy Ordinance (OSF).
FI questioned whether that new structure risks overlapping with the existing absolute secrecy rule in Chapter 30, Section 24 of the OSL, which it believes already covers the relevant categories of information.
If both provisions apply to the same data, a form of double regulation could arise. Under Swedish law, where multiple secrecy provisions compete, the strongest protection prevails. FI therefore asked lawmakers to clarify how the current and proposed rules are meant to interact before the legislation is finalized.
It is a technical point, but one that goes directly to legal certainty for firms submitting sensitive data in AI supervision processes.
When Is a “Remark” Appropriate?
A more nuanced but practical concern relates to the use of formal remarks as a supervisory measure.
The inquiry suggests that a remark should be used instead of an injunction when there is nothing left to remedy following a breach of the AI Act. FI cautioned that this wording could be interpreted to mean that a remark is only available once all identified deficiencies have been corrected.
In practice, under financial supervisory law, FI frequently issues remarks even where some deficiencies remain outstanding at the time of the decision.
A stricter interpretation, FI argued, could create an unintended incentive structure. A firm that does not voluntarily remedy deficiencies might end up in a better position than one that quickly corrects them and still receives a remark. FI therefore urged lawmakers to clarify that a remark may be issued even if certain deficiencies remain unaddressed at the time of the decision.
FI noted that it raised a similar point during consultations on Sweden's implementation of the EU regulation on markets in crypto-assets, and that position was ultimately reflected in government legislation.
Embedding AI Governance Inside the Financial Rulebook
The broader signal from FI’s response is one of institutional alignment rather than resistance.
Sweden appears poised to integrate AI oversight into its existing financial supervisory architecture rather than treating it as a parallel or detached regime. FI is not pushing back on that expansion. On the contrary, it is asking for the legal precision necessary to make it workable.
For financial institutions developing or deploying high-risk AI systems, that likely means AI compliance will be supervised by the same authority that already oversees capital, conduct, governance, and operational resilience.
The question now is less whether FI will take on the role, which it has made clear it supports, and more how cleanly Sweden's lawmakers can reconcile AI-specific powers with the deeply embedded structures of financial regulation.
The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.

