New AI Privacy Guidelines Put Human-Centered Development at the Core of the Conversation

Key Takeaways
  • New AI Privacy Guidance Issued: The Agency has published Personal Data Protection and Artificial Intelligence guidelines for organizations developing, deploying, or using AI systems.
  • GDPR and AI Governance Clarified: The guidance addresses when GDPR applies to AI models, how to define supplier and user roles, and how to establish lawful bases for processing personal data.
  • Focus on Emerging AI Risks: Special emphasis is placed on risks tied to large language models (LLMs), model anonymity, attacks on AI systems, and privacy-enhancing technologies.
  • DPIAs and Privacy by Design Highlighted: The guidelines outline when organizations should conduct Data Protection Impact Assessments (DPIAs) and how data protection by design principles apply to AI systems.
  • Privacy Positioned as a Foundation for Responsible AI: The Agency stressed that personal data protection is not an obstacle to AI innovation, but a prerequisite for lawful, responsible, and human-centered AI development.
Deep Dive

As organizations rush to integrate artificial intelligence into everything from internal operations to customer-facing services, regulators are grappling with how to let them move fast without leaving privacy protections behind.

The Croatian data protection authority's newly released guidance on Personal Data Protection and Artificial Intelligence steps directly into that debate, offering organizations a practical framework for navigating the privacy and governance challenges that are surfacing alongside rapid AI adoption.

Published on May 7, the guidelines are aimed at entities that develop, train, test, integrate, or use artificial intelligence systems. But rather than reading like a purely theoretical policy document, the guidance focuses heavily on the operational questions organizations are already running into as AI projects move from experimentation into real-world deployment.

The document walks through a range of issues that have quickly become central to AI governance discussions across industries. Among them are when the General Data Protection Regulation applies to an AI model or system, how organizations should determine the respective roles of suppliers and users, and how to establish the purpose and legal basis for processing personal data within AI environments.

The guidance also addresses what “data protection by design” means in practice for AI systems, a concept that has become increasingly important as organizations attempt to embed privacy safeguards earlier in development cycles rather than retrofitting controls after deployment.

Questions around transparency and individual rights are also a major focus.

The Agency outlines how organizations should inform data subjects about the use of AI systems and how they can enable individuals to exercise their rights when personal data is involved in automated or AI-driven processing activities. The guidelines further examine when organizations should conduct Data Protection Impact Assessments (DPIAs), an area that continues to draw heightened attention as AI systems become more complex and data-intensive.

Notably, the guidance dedicates significant attention to some of the more technically challenging and rapidly evolving risks surrounding artificial intelligence.

That includes issues tied to model anonymity, risks associated with large language models (LLMs), attacks targeting AI models and systems, and the role privacy-enhancing technologies can play in reducing exposure to personal data risks.

The emphasis on model and system attacks reflects growing concern across both privacy and cybersecurity communities that AI governance can no longer be treated solely as a compliance exercise. As organizations deploy increasingly sophisticated AI tools, concerns around security, resilience, misuse, and unintended data exposure are becoming tightly intertwined.

At the center of the Agency’s message, however, is a broader philosophical point that has become increasingly common in global AI governance discussions.

According to the guidance, personal data protection should not be viewed as an obstacle to innovation or AI development. Instead, the Agency frames privacy protections as a necessary foundation for building and deploying artificial intelligence systems in a way that is lawful, responsible, and human-centered.

The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.