Spain’s Data Watchdog Turns to Deepfakes in New Push for Responsible AI Use

Key Takeaways
  • Awareness Campaign: Spain’s data protection authority, the AEPD, has launched a public initiative warning about the risks of deepfakes and AI-generated content.
  • Consent Emphasis: The regulator underscores that using someone’s likeness without consent carries legal and ethical implications.
  • Verification Warning: Citizens are urged to question and verify suspicious content before sharing.
  • Broader AI Guidance: The initiative builds on prior AEPD materials addressing risks tied to AI and personal data use.
Deep Dive

The Spanish Data Protection Agency has unveiled a new initiative titled “Deepfakes are no joke,” anchored by an educational video designed to show just how easily AI-generated content can blur the line between reality and fabrication. The video walks viewers through a simulated scenario in which a seemingly authentic audiovisual clip is created from a single photograph, before revealing that the content is entirely artificial and produced with the subject’s consent.

The message is straightforward, even if the technology behind it is not. Deepfakes, powered by increasingly sophisticated AI models, can convincingly replicate a person’s face, voice, and mannerisms. While the technology has legitimate uses across entertainment, accessibility, and innovation, regulators are increasingly focused on its misuse, particularly where it enables impersonation, reputational harm, or the spread of disinformation.

Lorenzo Cotino, president of the Spanish Data Protection Agency, framed the initiative as both a warning and a call to action. “Artificial intelligence is a tool that can contribute to social progress, but its use must be accompanied by information and responsibility,” he said. “Manipulating images of third parties with AI is not neutral, even in seemingly trivial contexts, and requires rigorous evaluation. This video is an invitation to reflect and act prudently in the digital environment.”

The campaign goes beyond awareness and into practical guidance. The agency is urging individuals to better understand how AI systems operate and to consider the legal and personal consequences of creating or sharing manipulated content. It emphasizes the need to obtain explicit consent before using someone’s image or personal data and highlights the importance of verifying content before amplifying it, particularly in an environment where synthetic media can spread quickly.

This latest initiative is not an isolated effort. It builds on a broader body of guidance the agency has released in recent months addressing AI-related privacy risks. These include materials examining the implications of using third-party images in AI systems and broader advice on engaging with AI tools safely and responsibly.

Regulators are not only shaping rules around AI but are increasingly investing in public education to address behavioral risks that formal regulation alone cannot fully mitigate. As deepfake capabilities continue to advance, the challenge is no longer just technical or legal but cultural: ensuring that users understand both the power and the consequences of the tools at their fingertips.
