Dutch Regulator Warns Generative AI Risks Becoming a ‘Wild West’ Without Shared Values
Key Takeaways
- Generative AI Is Now Mainstream: Nearly one in four people in the Netherlands uses AI tools, making their societal impact immediate and unavoidable.
- Speed Is Outpacing Governance: Organizations are deploying generative AI faster than they are assessing its risks to individuals and society.
- Unchecked Growth Carries Real Harm: From non-consensual imagery to AI becoming a primary information source, risks are escalating alongside capabilities.
- Values Must Lead, Not Follow: The regulator warns that prioritizing speed over shared norms risks turning AI adoption into an uncontrolled societal experiment.
- Lawful Deployment Is Essential: Generative AI must comply with GDPR and EU AI rules throughout both training and deployment.
Deep Dive
The rapid spread of generative artificial intelligence is testing Europe’s ability to keep innovation aligned with democratic values, according to a new vision document published Tuesday by the Dutch Data Protection Authority.
The watchdog says generative AI is no longer a future concern but a present-day force shaping how people learn, communicate, and make decisions. With new tools entering the market at breakneck speed, many of them developed by US-based technology companies, the regulator warns that Europe’s digital autonomy is increasingly under strain, particularly against the backdrop of geopolitical uncertainty and economic pressure.
That impact is already visible in daily life. Data from Statistics Netherlands shows that 23 percent of the Dutch population now uses AI applications such as ChatGPT, with usage markedly higher among younger people. As these tools move deeper into education, healthcare, media, and government, the authority argues, the consequences of how they are designed and deployed are no longer abstract.
Innovation Without Guardrails
The regulator’s concern is not with generative AI itself, but with how quickly it is being rolled out, often without sufficient reflection on the risks. It points to applications that can generate non-consensual sexual imagery, as well as chatbots that present themselves as sources of mental health support. At the same time, more users are turning to AI systems as their primary or even sole source of information.
According to the authority, organizations are frequently moving faster than their governance frameworks allow, deploying generative AI without fully considering the implications for individuals or society.
“This presents new and complex challenges in terms of control,” said Aleid Wolfsen, chair of the Dutch Data Protection Authority. “Generative AI offers enormous opportunities, but we must deploy the technology carefully. Innovation is welcome, but it must be equipped with strong guardrails.”
Those guardrails, the regulator stressed, already exist in law. AI models must be trained and deployed in line with the General Data Protection Regulation and the EU’s AI Regulation, not treated as experimental systems operating outside established legal and ethical boundaries.
The risks the authority highlights are already translating into regulatory action elsewhere in Europe. In the UK and France, watchdogs and prosecutors have launched parallel investigations into X and its Grok chatbot following reports that the system was used to generate non-consensual sexualised images, including content involving children. UK regulators are examining the issue under both data protection and online safety law, while French authorities have escalated matters with a criminal cybercrime investigation that included a raid on X’s Paris offices and the summoning of senior executives to hearings later this year. The cases underscore how rapidly deployed generative AI systems are exposing gaps between emerging harms and the fragmented legal frameworks now being used to address them.
Similar warnings are emerging from financial regulators as AI moves from experimentation into core operations. In France, the Autorité des Marchés Financiers has reported that the vast majority of financial market participants are already using AI or plan to do so imminently, with many systems live in production environments.
While adoption is accelerating, the French regulator has flagged governance, not technology, as the primary risk, citing data protection, accountability, over-reliance on automation, and growing dependence on a small number of non-European AI providers. The findings reinforce the Dutch watchdog’s concern that AI is being embedded across critical sectors faster than the controls, oversight structures, and shared norms needed to govern it responsibly.
Three Futures Europe Should Avoid
In its vision document, the authority sketches out three futures it believes Europe must steer clear of. One is a “Wild West” scenario, where generative AI grows unchecked and without clear rules. Another is a paralysis scenario, in which overly complex or unclear regulation slows innovation to a crawl. The third is an overly defensive “bunker” approach, where fear of risk stifles technological progress altogether.
The regulator argues that none of these paths is inevitable. Generative AI, it says, is still manageable, but only if values, rather than speed, become the starting point.
“When a technology is rapidly integrated into education, healthcare, media, and government without a shared normative framework, it doesn’t create an innovation ecosystem,” Wolfsen said. “It creates a societal experiment without protocol. That’s why we have to make choices now.”
The authority is instead calling for a fourth path that balances room for innovation with clear protections for democracy, fundamental rights, and public trust. It also warned against allowing the development and deployment of generative AI to become concentrated in the hands of a small number of dominant players.
Transparency, risk awareness, and respect for fundamental rights must be built into AI systems from the outset, not added after problems emerge.