AI Governance

EU Moves to Spell Out Google’s DMA Duties on Android AI Access & Search Data Sharing

The European Commission has opened two formal proceedings aimed at clarifying how Google must meet its obligations under the Digital Markets Act, sharpening the focus on how the company runs Android and Google Search at a moment when AI is rapidly reshaping both.

EU Data Protection Authorities Caution Against Cutting Corners as AI Act Is Streamlined

The European Data Protection Board and the European Data Protection Supervisor responded to the European Commission's proposed "Digital Omnibus on AI," a package designed to simplify the AI Act's implementation and make the rules easier to apply in practice. The message from Europe's privacy watchdogs is that simplification is welcome, but only up to a point.

AI Operational Risk Across the ML Lifecycle

Managing risk across the AI/ML lifecycle is critical for building reliable, secure, and ethical models. From data collection and labeling through training, fine-tuning, and evaluation, each stage presents distinct challenges that can affect performance, reproducibility, fairness, and safety. Well-defined controls at each stage help keep models trustworthy, auditable, and resilient to both technical and operational failures.
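One way to operationalize stage-by-stage controls is to track them in a simple registry and audit it for coverage gaps. The sketch below is a hypothetical illustration only: the stage names, control names, and `ControlRegistry` API are assumptions for this example, not drawn from any specific governance framework or library.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle stages; labels are illustrative assumptions.
LIFECYCLE_STAGES = [
    "data_collection",
    "labeling",
    "training",
    "fine_tuning",
    "evaluation",
]

@dataclass
class Control:
    """A single risk control attached to one lifecycle stage."""
    name: str
    stage: str
    implemented: bool = False

@dataclass
class ControlRegistry:
    """Tracks controls across the lifecycle and reports coverage gaps."""
    controls: list = field(default_factory=list)

    def add(self, name: str, stage: str, implemented: bool = False) -> None:
        if stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown lifecycle stage: {stage}")
        self.controls.append(Control(name, stage, implemented))

    def gaps(self) -> list:
        # A stage with no implemented control is an audit gap.
        covered = {c.stage for c in self.controls if c.implemented}
        return [s for s in LIFECYCLE_STAGES if s not in covered]

registry = ControlRegistry()
registry.add("dataset provenance log", "data_collection", implemented=True)
registry.add("inter-annotator agreement check", "labeling", implemented=True)
registry.add("fixed-seed reproducibility policy", "training")  # planned only
registry.add("bias and fairness eval suite", "evaluation", implemented=True)

print(registry.gaps())  # stages still lacking an implemented control
```

Run against the sample entries above, the audit flags `training` and `fine_tuning` as uncovered, since their controls are either planned-only or absent.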

Italian Antitrust Authority Orders Meta to Halt WhatsApp AI Terms That Shut Out Rival Chatbots

Italy’s competition watchdog has ordered Meta to immediately suspend new WhatsApp business terms after concluding that they risk unlawfully excluding competing AI chatbot services from one of the world’s most widely used messaging platforms.

FTC Vacates Rytr AI Order, Signaling Shift Under Trump Administration’s AI Policy

The Federal Trade Commission has moved to reopen and set aside a 2024 final consent order against AI writing company Rytr, concluding that the original enforcement action failed to meet the legal standards of the FTC Act and imposed unnecessary constraints on artificial intelligence innovation.

New York Moves to Rein In Frontier AI With Transparency & Reporting Rules

On December 22, New York Governor Kathy Hochul signed the Responsible AI Safety and Evaluation Act, or RAISE Act, setting what state leaders describe as a nation-leading standard for transparency and accountability among developers of so-called frontier AI models. The legislation requires large AI developers to publicly document their safety practices and to notify the state within 72 hours when serious harm linked to their systems is identified.

Korea’s Privacy Regulator Pivots Toward Prevention as AI Reshapes the Data Landscape

The Personal Information Protection Commission (PIPC) recently unveiled its policy directions for 2026, laying out a sweeping plan to move Korea’s privacy regime away from after-the-fact penalties and toward a more preventive, risk-based approach designed for an AI-embedded society. The roadmap was presented on December 2 at the Sejong Convention Center during a joint reporting session with the Ministry of Science and ICT, the Korea Aerospace Administration, and the Korea Media and Communications Commission.