This Risk Is Scary

By Norman Marks
Key Takeaways
  • AI Adoption Is Accelerating: Nearly half of in-house legal departments are now using generative AI tools, a sharp rise from the previous year, highlighting growing trust and pressure to adopt.
  • Opportunities Are Real and So Are the Risks: While AI can drive major gains in contract drafting, compliance, and legal research, errors in interpretation or review can expose companies to serious legal and operational consequences.
  • In-House Legal Teams May Lack Tech Readiness: Many corporate counsel are not adequately equipped to assess or mitigate AI-related risks, creating a blind spot in governance.
  • Risk and Audit Must Step Up: The oversight of AI use in legal and executive decision-making is now an essential responsibility for risk and audit professionals, not just a nice-to-have.
Deep Dive

In this article, Norman Marks breaks down the double-edged nature of AI adoption in corporate legal departments, highlighting both the remarkable opportunities for productivity and the underappreciated risks that could undermine sound judgment, legal integrity, and even corporate stability. Drawing on recent industry surveys and personal observations, Marks makes a compelling case for why risk and audit professionals must step up and get involved.

When Legal AI Goes Rogue and Why Risk and Audit Can't Look Away

I recently discovered that some people are projecting AI will transform the work of corporate counsel. Yes, there are several articles on how it will transform the work of law firms, but I am concerned right now with its use by in-house attorneys.

Here is one article that explains the massive productivity and effectiveness gains when AI is used intelligently. It tells us, "The adoption of generative artificial intelligence (Gen AI) tools in corporate legal departments increased sharply in 2024, with 44% of in-house legal leaders reporting they are now using Gen AI (up from 28% a year ago), according to a March 2025 survey by FTI Consulting."

“Most general counsel have indicated openness to using AI in nearly every major legal application,” reported Law360. “One reason behind the increase in comfort with AI might be that general counsel are better prepared for AI risks.”

The opportunities are real and very significant. They merit serious consideration by every legal team, and the CFO (among others) should be pushing hard for the careful adoption of AI for multiple functions such as contract drafting, document review (including for revenue recognition compliance purposes), and research into changes in government regulations and case law.

But the risks are also significant. What scares me is that the in-house attorneys I have worked with are not particularly savvy about technology and related risks, nor experienced in a disciplined process for addressing them.

This article in the Wall Street Journal should fuel our sense of alarm. Read it. The article asserts, “Every week, we hear more reports from around the country about AI bots fueling people’s delusions.”

What would happen if an in-house attorney relied on an AI agent to interpret new case law and got it wrong? What if it were used to review a contract or other document and made a mistake?

This is not just an opportunity for risk and audit personnel to make a difference by helping in-house counsel ensure the reliable and intelligent use of AI, but arguably it’s an essential role.

Do you know how it is used today?

Do you know how they plan to use it tomorrow?

Let’s add to that by considering whether executives are using AI to do their own legal research or provide an opinion they will rely on.

I don’t know about you, but I use AI to give me insights into my medical situation, why I have this or that pain and discomfort. Curiously, the AI has yet to tell me it’s because I’m getting old! But I don’t rely on the AI or even an ‘authoritative’ website. I talk to my medical professionals.

How disciplined are your executives in using AI in running the business? Are they sufficiently disciplined to challenge the results, or do they allow it to feed their bias (or delusions)?

I am worried about relying on AI agents for legal work. But that’s just one of the many areas where executives and others may be using an AI agent instead of common sense, disciplined research, or a professional. Any use of an AI needs to be disciplined. Risks need to be known and addressed.

If your risk and audit professionals are not similarly worried, to the point that a major part of their work is helping address the problem, I am worried.

No, I am scared. Talk about risks that can bring down a company! Are you scared?

The GRC Report is your premier destination for the latest in governance, risk, and compliance news. As your reliable source for comprehensive coverage, we ensure you stay informed and ready to navigate the dynamic landscape of GRC. Beyond being a news source, the GRC Report represents a thriving community of professionals who, like you, are dedicated to GRC excellence. Explore our insightful articles and breaking news, and actively participate in the conversation to enhance your GRC journey.
