Italy Presses AI Firms to Put “Hallucination” Risks Front & Center

Key Takeaways
  • Transparency Over Enforcement: The Italian Competition Authority closed three investigations without finding an infringement, instead securing commitments under Article 27(7) of the Consumer Code.
  • Hallucination Risk Disclosures: DeepSeek, Mistral AI, and NOVA AI must introduce clear warnings that AI-generated content may be inaccurate or misleading.
  • User Interface Changes: Permanent disclaimers have been added beneath chat windows, with additional hyperlinks explaining hallucination risks.
  • Pre-Contractual Clarity: Companies expanded pre-registration and pre-purchase information to explicitly state that outputs may not be reliable and should be verified.
  • Company-Specific Measures: DeepSeek committed to investing in technology to reduce hallucinations, while NOVA AI must clarify that it acts only as an interface for multiple chatbots and does not process their responses.
Deep Dive

Italy’s antitrust authority has quietly drawn a line under three investigations into leading artificial intelligence providers, using the cases to ensure users are told, clearly and upfront, when the technology might be wrong.

The Italian Competition Authority said it has secured commitments from Hangzhou DeepSeek Artificial Intelligence, Mistral AI, and Scaleup Yazilim Hizmetleri Anonim Şirketi, which operates the NOVA AI chatbot platform. The investigations centered on the risk of “hallucinations,” where generative AI systems produce inaccurate or misleading information.

Rather than finding a breach of the law, the authority closed the proceedings under Article 27(7) of Italy’s Consumer Code after the companies agreed to a series of measures designed to improve transparency. The focus is less on punishing past conduct and more on reshaping how these tools present themselves to users at the point of interaction.

Across websites and apps, the companies will now be required to make the limits of their systems harder to miss. Permanent disclaimers have been introduced directly beneath chat windows, warning users in Italian about the possibility of hallucinations and linking to additional explanations. The changes extend beyond the interface itself, reaching into the steps that precede registration or purchase, where pre-contractual information has been expanded to include clear warnings that AI-generated content may not always be reliable and should be verified.

The authority’s intervention reflects a broader shift in how consumer protection rules are being applied to generative AI. Tools that can mimic confidence and fluency, even when wrong, present a different kind of risk, one that sits uncomfortably within traditional frameworks for unfair commercial practices.

In DeepSeek’s case, the commitments go further. The company agreed to invest in technology aimed at reducing hallucinations, while acknowledging that current systems cannot eliminate the problem entirely.

For NOVA AI, the emphasis is on clarity about what the service actually does. The company committed to making it explicit that its platform functions as a single interface for accessing multiple chatbots, each described separately, and that it does not aggregate or process their responses.

The outcomes suggest regulators are less interested in debating whether hallucinations can be solved and more focused on ensuring users understand that they exist. It is a subtle shift, but an important one, as authorities begin to test how far existing consumer protection rules can stretch to accommodate the realities of generative AI.

The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.