AI Governance

Global Regulators Draw a Line on AI Deepfakes as Privacy Risks Escalate

In a rare show of coordination, 61 data protection authorities from across the globe have issued a joint statement warning that AI systems capable of generating realistic images and videos of real people are creating a fast-moving and deeply personal category of risk. The concern is not just about misinformation or synthetic media in the abstract. It is about harm that lands squarely on individuals, often without their knowledge and increasingly without meaningful recourse.

Spain’s Data Watchdog Turns to Deepfakes in New Push for Responsible AI Use

The Spanish Data Protection Agency has unveiled a new initiative titled “Deepfakes are no joke,” anchored by an educational video designed to show just how easily AI-generated content can blur the line between reality and fabrication. The video walks viewers through a simulated scenario in which a seemingly authentic audiovisual clip is created from a single photograph, before revealing that the content is entirely artificial and produced with the subject’s consent.

EU Parliament Moves to Rein in AI Training on Copyrighted Content

The European Parliament has voted overwhelmingly to strengthen protections for copyrighted works used in artificial intelligence systems, signaling growing concern among lawmakers that generative AI is reshaping the economics of creative industries without clear rules for compensation or consent.

EU Lawmakers Move to Tighten Copyright Protections as AI Training Comes Under Scrutiny

On Tuesday, the European Parliament is expected to outline a set of principles aimed at strengthening copyright protections as generative AI systems increasingly rely on vast troves of creative content for training. The discussion comes as policymakers grapple with concerns from publishers, artists, and media organizations that their work is being absorbed into AI models without clear acknowledgment or compensation.

Global Privacy Regulators Rally Around New Principles for AI Image Generation Tools

Privacy regulators from around the world are stepping up scrutiny of generative AI tools capable of producing realistic images and videos of real people, warning that the technology is already being used in ways that threaten privacy, dignity, and safety.

Sweden Positions Its Financial Watchdog at the Center of AI Supervision

Sweden’s financial watchdog is preparing for a larger role in the age of AI. In its formal response to the government inquiry “Adaptations to the AI Regulation,” Finansinspektionen (FI) expressed clear support for serving as the market surveillance authority for the financial sector under the EU AI Act. It also backed proposals that would give it a defined role in innovation-promoting measures, including AI regulatory sandboxes.

The Shadow AI Crisis: Why Enterprise Governance Is Failing & How to Fix It

Almost half of all GenAI use now occurs through personal accounts on tools like ChatGPT, Claude, and Perplexity, entirely outside corporate oversight or control. This isn’t about a few rogue users acting in secret. Approved tools are being bypassed on a broad scale across entire organizations, with the average company experiencing 223 shadow AI incidents each month, twice as many as just a year ago.