The Invisible Third Party: AI as a Vendor Risk You're Probably Not Managing
Key Takeaways
- AI as an Unmanaged Third Party: Many organizations treat generative AI like a productivity tool when, in practice, it functions like a third party that receives, processes, and may retain corporate data.
- The Real Risk Is Quiet Exposure: Sensitive information including intellectual property, personal data, regulated information, and third-party confidential data can be disclosed through everyday AI use without triggering traditional incident response mechanisms.
- Traditional TPRM Frameworks Are Behind: Existing assessment tools, contracting processes, and review cycles were not built for the speed, opacity, and unique threat profile of AI vendors.
- Organizations Need an AI-Specific Vendor Risk Framework: Effective governance starts with inventorying AI tools, classifying what data can be used with them, applying AI-specific due diligence, and continuously monitoring vendor changes.
- TPRM Has a Chance to Lead: As regulators increasingly expect organizations to govern the AI systems they deploy, third-party risk teams are well positioned to make AI vendor risk a formal, managed part of enterprise oversight.
Deep Dive
Imagine a scenario that unfolds hundreds of times daily across organizations of all sizes and sectors. A senior analyst, facing a tight deadline, pastes the text of a confidential vendor contract into an AI-powered tool. She seeks a quick summary, perhaps highlighting key terms or comparing it with a previous agreement. The tool responds promptly. She gets the information she needs in seconds and moves on.
Nothing explodes. No alarm sounds. No incident ticket is opened. No one in legal, compliance, procurement, or information security receives a notification. And yet, something important has happened. Proprietary contractual terms, pricing structures, counterparty obligations, and possibly the vendor's identity have left the organization's boundary, consumed by a system the organization has never formally evaluated, has never set data handling rules for, and does not regularly monitor.
This is the quiet side of AI data leakage. It does not announce itself like a ransomware attack would. It doesn't make headlines, trigger regulatory alerts, or cause immediate, traceable consequences. It is unremarkable, gradual, and for most organizations, largely unmanaged.
The challenge is structural, not behavioral. The analyst in this situation isn't doing anything that feels wrong. She is using a tool that her employer hasn't banned, completing a task her manager would approve, and achieving a business outcome that is completely legitimate. The issue is that the tool she used is, by every meaningful definition, a third party: an outside organization that receives, processes, and stores corporate data. It has simply never been treated as one.
This article argues that artificial intelligence systems, AI-powered productivity tools, large language model interfaces, AI-enhanced SaaS applications, and the entire ecosystem of generative AI platforms now integrated into enterprise workflows constitute a class of third parties that most organizations are not managing within their Third-Party Risk Management programs. The governance gap is not due to negligence; it is a result of structural lag: TPRM frameworks, assessment tools, contractual standards, and regulatory guidance were mostly developed before generative AI became a routine part of enterprise operations. That lag needs to be addressed now.
The themes discussed here are explored more thoroughly in my upcoming book, The Future of Third-Party Risk Management and Data Privacy (Taylor & Francis, 2026), which covers the changing relationship between TPRM, data privacy, and new technology risks. This article provides a practitioner-focused overview of how that intersection appears specifically in the context of AI vendor risk, and what organizations need to do about it now.
What Makes AI a Third Party, and Why It Is Different
Third-party risk management is based on a key idea: any outside entity that accesses, processes, stores, or transmits an organization's data or systems on its behalf carries risk that must be evaluated and managed. This definition is intentionally broad, covering outsourced service providers, cloud infrastructure vendors, software-as-a-service platforms, managed security providers, consulting firms, and supply chain partners. It’s this wide scope that gives TPRM its importance as a discipline.
AI tools meet this definition clearly. When an employee uses a generative AI platform to summarize a document, draft a message, analyze data, or generate code, that platform receives the input provided by the employee. It processes that input, usually on infrastructure operated by the AI vendor rather than the employer. It keeps parts of that interaction, in ways and for durations that vary greatly depending on the vendor and the terms of service the employee agreed to, often individually and without enterprise review. In many cases, it uses interaction data to improve or refine its core models, meaning that what an employee types today could influence the outputs the model produces for others tomorrow.
Each of these characteristics—external processing, data retention, and potential data use beyond the immediate transaction—is exactly what makes a third party a subject of TPRM scrutiny. The fact that the interaction happens through a browser window, a mobile app, or an API call instead of a formal service contract does not alter the underlying risk profile. It only affects the governance footprint, and it does so in the wrong direction.
This is the browser-as-procurement-bypass issue, and it existed long before AI. The rise of software-as-a-service created the environment for shadow IT: technology adoption that happens at the individual or department level without involving procurement, legal, or information security. AI has dramatically sped up this trend. For most knowledge workers, the only barrier to using a powerful AI tool is a browser and an email address. These tools are often free at the individual level. They are quick, genuinely useful, and made to be immediately available. They do not require a purchase order, vendor onboarding, or a security review.
What makes AI fundamentally different from the cloud and SaaS vendors that TPRM programs have learned to manage, and why the governance gap is more significant, is a set of characteristics unique to large language models and generative AI systems.
First, the data flows involved with AI usage are unusually opaque. When an employee uses a cloud storage service, the organization generally knows what data is stored and where. However, when an employee uses a generative AI tool, the data flows are much less clear. Input data, such as prompts, documents, and context provided by employees, may be retained in logs, used for model evaluation, incorporated into training pipelines, or processed by human reviewers for quality assurance. These practices differ among vendors, change over time, and are often only disclosed in lengthy, technical terms in service documents that employees rarely read.
Second, AI systems introduce risks that don't have direct counterparts in traditional third-party relationships. Prompt injection attacks, where malicious instructions embedded in processed content cause the AI to perform unintended actions, represent a growing threat with serious implications for enterprises using AI in agentic or automated roles. Model inversion and membership inference attacks, though mostly theoretical at current model sizes, raise the long-term risk that an adversary could extract sensitive information from AI systems through careful querying. These risks are not typically evaluated through standard vendor assessment questionnaires.
Third, and perhaps most importantly for TPRM practitioners, AI vendors iterate at a pace that outstrips conventional risk management cycles. A vendor's data handling practices, model architecture, terms of service, and security posture may change substantially between an organization's annual assessment and its next review. The AI ecosystem is moving faster than the governance frameworks designed to oversee it, and that gap represents a persistent and growing risk.
Finally, the misconception that "it's just a tool" still persists at both the business unit and executive levels. AI tools are often classified alongside productivity software, spreadsheets, word processors, and communication platforms rather than with data processors and service providers that TPRM programs are designed to oversee. This mistaken classification has tangible consequences. It leads to AI tools being purchased without proper due diligence, deployed without data handling agreements, utilized without acceptable use policies that reflect their risk profile, and monitored without the continuous oversight that any major third-party relationship requires.
The Data Privacy Exposure Map
Understanding what is at risk requires mapping the categories of corporate data that employees regularly provide to AI tools during normal business activities. This mapping isn't just a theoretical exercise; it reflects observable patterns of AI use that are present today across nearly every industry, function, and organizational setting.
Category One: Intellectual Property
Strategic plans, unreleased product roadmaps, pricing models, merger and acquisition analyses, research and development data, proprietary algorithms, and competitive intelligence represent some of the most sensitive information an organization holds. They are also increasingly the kinds of material employees ask AI tools to help them with.
A product manager drafting a go-to-market strategy might use an AI tool to refine the narrative. A corporate development analyst might paste a deal model into an AI platform to identify potential issues. A research team might upload proprietary data to generate summary analyses. In each case, information that the organization has invested significant resources to develop and that represents real competitive advantage has been shared with an external system without any of the controls that would apply if sharing the same information with a human third party.
The exposure risk isn't just that the AI vendor might intentionally misuse the information. The more immediate dangers are that the data may be stored in systems vulnerable to breach, used in ways the organization never expected or agreed to, and beyond the organization's ability to audit, retrieve, or delete afterward.
Category Two: Personal Data
The privacy risks of AI data exposure are especially serious when the data involves individuals, including employees, customers, patients, or any other natural persons whose information is protected by privacy laws.
Human resources professionals who use AI tools to draft performance reviews, analyze engagement survey results, or process hiring materials may unintentionally expose employee personal data. Customer success and sales teams that use AI to analyze account information, draft communications, or summarize call transcripts might reveal customer records. Healthcare organizations whose staff use AI tools to help with documentation or administrative tasks face the risk of exposing protected health information.
The General Data Protection Regulation, the California Consumer Privacy Act, and similar laws in various jurisdictions worldwide impose specific obligations on organizations that process personal data. These obligations also extend to third parties that those organizations engage to handle data on their behalf. An AI vendor that receives personal data through an employee's use of a generative AI tool is, in most regulatory frameworks, considered a data processor or service provider. The responsibilities that come with that relationship—such as data processing agreements, purpose limitation, data subject rights, and breach notification—do not disappear just because the relationship was formed through individual terms of service rather than a formal enterprise contract.
The regulatory exposure here is real and increasing. Supervisory authorities in the European Union have already begun investigations into AI platforms' compliance with GDPR data processing requirements. The costs of non-compliance, including fines, reputational harm, and the operational burden of responding to regulatory inquiries, are significant, and a TPRM program that brings AI tools under formal oversight is one of the few controls positioned to reduce them.
Category Three: Regulated and Sector-Specific Data
Organizations in financial services, healthcare, legal services, defense, and other regulated sectors face additional risks when employees use AI tools without proper governance. The data flowing through these organizations is often governed by sector-specific regulations that set strict rules on how it is handled, where it may be processed, and who can access it.
Financial institutions governed by rules on confidential supervisory information, material non-public data, or customer financial details face specific risks when employees use AI tools not evaluated for compliance with those regulations. Healthcare organizations risk HIPAA violations whenever protected health information is entered into an AI system that hasn’t signed a Business Associate Agreement. Legal departments and law firms encounter professional responsibility issues when privileged communications or work product are shared with AI systems without proper confidentiality analysis.
The sector-specific aspect of AI data exposure is one area where existing regulatory frameworks have begun to respond, although often reactively and incompletely. Several financial regulators have issued guidance addressing AI use by regulated entities. Healthcare regulators have clarified how HIPAA applies to AI-related data processing. However, the pace of regulatory development has not kept up with AI adoption, and the responsibility of filling this gap falls on the organizations themselves and the TPRM professionals managing the risks that gap creates.
Category Four: Third-Party Confidential Data
Perhaps the most underappreciated aspect of AI data exposure is the risk it poses not only to the organization using the AI tool but also to the organization's third parties, such as vendors, partners, customers, and counterparties, whose confidential information is held under contractual confidentiality obligations.
A procurement professional who pastes a vendor's pricing proposal into an AI tool to assist with negotiation analysis may inadvertently expose information that the vendor shared under a non-disclosure agreement. A consultant using AI to help draft client deliverables might reveal client confidential information to a system neither the consultant nor the client has reviewed. Similarly, a law firm associate utilizing AI for document review could disclose opposing party information, client confidences, or materials covered by protective orders.
The cascading liability implications of this exposure category are substantial. Organizations may face breach of contract claims, professional liability, and regulatory penalties not only due to their own data exposure but also because of the information entrusted to them by others. This aspect of AI data risk has received limited attention in published guidance, yet it poses a significant legal and reputational threat to organizations across nearly every sector.
The Compounding Effect
A key insight that risk professionals must understand is that AI data exposure risks don't occur in isolation. A single employee prompt often includes data from multiple categories at once. For example, a business development professional working on a partnership proposal might paste into an AI tool a document containing the organization's strategic priorities (intellectual property), the personal contact details of potential partners (personal data), sensitive terms from previous agreements (third-party confidential data), and information about the organization's financial status (regulated data in some cases). The exposure is not merely additive; it compounds.
The technical threat vectors increase the risk further. Prompt injection attacks, where malicious content embedded in documents or web pages manipulates an AI tool's behavior, pose an emerging risk for organizations using AI in automated or agent-based contexts. While these attacks are more relevant to AI systems that can perform actions, browse the web, execute code, or send communications, rather than simple question-and-answer interfaces, the range of AI use is rapidly expanding. Organizations developing or implementing AI agents must see prompt injection not just as a theoretical issue but as an active threat surface that needs assessment.
Why Existing TPRM Frameworks Haven't Kept Up
The governance gap in AI vendor risk isn't due to indifference or incompetence by TPRM practitioners. Instead, it stems from a structural mismatch between how quickly AI is adopted and how fast governance frameworks, assessment tools, and contractual standards develop. Recognizing the specific aspects of this mismatch is key to addressing it effectively.
The Assessment Instrument Gap
The Standardized Information Gathering questionnaire, the Consensus Assessments Initiative Questionnaire, and similar third-party assessment tools form the core of most TPRM programs. These tools have been developed and refined over many years to address the risk profiles of traditional third-party relationships, managed service providers, cloud infrastructure vendors, outsourced business process providers, and similar engagements.
They were not designed to assess large language model systems, and it shows. Standard questions about data center physical security, network segmentation, access control governance, and business continuity planning are mostly irrelevant to the risk profile of a generative AI platform. The key questions about training data use, model architecture security, prompt data retention, inference attack defenses, and responsible AI governance are missing from traditional tools or addressed only at a surface level.
Several organizations and standards bodies have begun developing AI-specific assessment criteria, and the emerging ISO 42001 standard for AI management systems represents a significant move toward a more structured framework. However, the adoption of these new tools is inconsistent, and many TPRM programs still evaluate AI vendors—when they do at all—using instruments designed for a completely different type of risk.
The No-Contract, No-Assessment Assumption
A common assumption in TPRM practice is that vendor risk management begins only after a contract is signed. Without a contract, there is no vendor relationship, and therefore no assessment requirement. This assumption was problematic even before the AI era; shadow IT has long created unmanaged third-party relationships outside the procurement process, but AI has made this situation unsustainable.
The most popular generative AI tools are freely accessible under individual terms of service. Employees begin using them without approval, legal review, or following organizational decision-making processes that usually trigger a TPRM assessment. By the time information security or compliance teams find out about a specific tool, it may already be deeply integrated into the workflows of many employees.
The practical implication is that TPRM programs cannot rely solely on the procurement process as the trigger for AI vendor assessments. Organizations need alternative means of identifying AI tool adoption, including shadow AI discovery processes modeled on the shadow IT discovery programs many organizations have built over the past decade, and they must be ready to evaluate tools that are not covered by enterprise contracts and whose vendors may not entertain traditional due diligence requests.
The Contractual Gap
When organizations attempt to engage AI vendors at the enterprise level, they often face a contractual environment that is very different from what they are accustomed to managing. The terms of service for major AI platforms are typically non-negotiable for all but the largest enterprise clients. Data processing policies, retention rules, model training opt-out options, and confidentiality commitments might be less favorable than those offered by comparable traditional vendors, and the organization's ability to negotiate alternatives is often limited.
This presents a risk management challenge that is different from the usual issues faced by TPRM practitioners. The question isn't whether the vendor will accept the organization's standard data processing addendum, but whether the vendor's non-negotiable terms are acceptable based on the data the organization plans to share with the platform. This requires TPRM programs to establish AI-specific risk acceptance criteria and provide business units with clear guidance on which types of data can and cannot be used with AI tools operating under standard consumer or small-business terms.
The Regulatory Mapping Gap
The major data privacy regulatory frameworks, including the General Data Protection Regulation, the California Consumer Privacy Act, the Health Insurance Portability and Accountability Act, and their international counterparts, were established to address data processing relationships that existed before generative AI became a key business technology. Applying these frameworks to AI data processing requires careful analysis, and guidance from regulatory authorities remains incomplete and evolving.
Under the GDPR, for example, whether an AI vendor acts as a data processor—which requires a data processing agreement—or as an independent data controller—raising different and potentially more complex compliance obligations—depends on a factual analysis that many organizations have not yet conducted for their AI tool relationships. The answer may differ depending on the tools, use cases, and the specific terms under which the tool is accessed.
TPRM practitioners relying on established regulatory frameworks for third-party data handling must navigate this ambiguity carefully and should consult closely with legal and privacy counsel as the regulatory landscape evolves.
The Speed Problem
Traditional TPRM programs typically conduct annual assessments for most vendors, with more frequent reviews for high-criticality relationships. This schedule was created for a third-party ecosystem where vendor practices, risk profiles, and contractual terms evolve slowly. It is poorly suited to the AI vendor ecosystem, where data handling practices, model architectures, terms of service, security measures, and organizational structures can change substantially within months.
An AI vendor's data retention policy today can differ greatly from what it was twelve months ago. A vendor that did not initially use interaction data for training may have since changed that practice. A vendor that was well-capitalized and independent might have been acquired. The annual assessment cycle, as usually implemented, cannot keep pace with change of this speed.
Building an AI Vendor Risk Framework: What Good Looks Like
Addressing the AI vendor risk gap requires TPRM programs to adjust their processes, tools, and governance frameworks to fit the unique traits of AI systems. The following framework builds on established TPRM principles and tailors them for the AI environment. It provides a practical starting point for practitioners developing or improving their organizations' AI vendor risk programs.
Step One: AI Tool Inventory (You Can't Manage What You Can't See)
The foundation of any effective risk management program is visibility. Before an organization can evaluate, control, or oversee its AI vendor relationships, it must understand what AI tools its employees are using. Currently, for most organizations, this inventory is not available in any comprehensive form, and building it requires active discovery rather than passive monitoring.
Shadow AI detection should utilize multiple data sources and methods. Network traffic analysis can identify connections to known AI platform domains. Endpoint security tools can detect AI application installations and browser extensions. Expense management systems can uncover subscriptions to AI services. Employee surveys and business unit interviews can reveal use cases that technical monitoring alone might miss. Procurement records can identify formally acquired AI tools, which should then be cross-checked against the broader inventory to locate gaps.
The output of this discovery process should be a maintained inventory of AI tools, similar to the software asset inventory that most organizations keep for conventional applications. It should record the tool, the business unit using it, the main use case, the types of data processed, and the contractual basis for access. This inventory must be reviewed and updated regularly, not just as a one-time task.
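As a minimal sketch of how discovery output can seed such an inventory, the example below matches proxy log entries against a small watchlist of AI platform domains and creates stub inventory records for follow-up. The domain list, log format, and parsing are illustrative assumptions rather than a reference implementation; the record fields mirror the ones listed above.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative watchlist; a real program would maintain a much larger,
# regularly updated list of AI platform domains.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

@dataclass
class AIToolRecord:
    """One entry in the AI tool inventory (fields mirror the list above)."""
    tool: str
    business_unit: str = "unknown"   # filled in via business unit interviews
    primary_use_case: str = "unknown"
    data_types_processed: list[str] = field(default_factory=list)
    contractual_basis: str = "none (individual terms of service)"
    last_reviewed: date = field(default_factory=date.today)

def discover_ai_tools(proxy_log_lines: list[str]) -> dict[str, AIToolRecord]:
    """Scan proxy/DNS log lines for known AI domains and create inventory stubs.

    Assumes the destination hostname is the third whitespace-separated field
    in each line -- adjust the parsing to your own log format.
    """
    inventory: dict[str, AIToolRecord] = {}
    for line in proxy_log_lines:
        fields = line.split()
        if len(fields) < 3:
            continue
        host = fields[2]
        if host in KNOWN_AI_DOMAINS and host not in inventory:
            inventory[host] = AIToolRecord(tool=host)
    return inventory

# Example: two log lines, one of which hits a watched AI domain.
logs = [
    "2025-06-01T09:14:02 10.0.4.17 chat.openai.com 443 ALLOWED",
    "2025-06-01T09:14:05 10.0.4.22 intranet.example.com 443 ALLOWED",
]
for record in discover_ai_tools(logs).values():
    print(record)
```

Technical discovery of this kind only finds the tools; the "unknown" fields are a deliberate reminder that the use case and data types still have to come from the surveys and interviews described above.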
Step Two: Data Classification Before AI Use
The risk posed by a specific AI tool depends not only on the tool's security and data handling practices but also on the sensitivity of the data employees provide to it. Therefore, a strong AI vendor risk program must be built on a foundation of data classification, understanding what types of data the organization holds, how sensitive each type is, and the handling requirements for each.
Most organizations have some form of data classification policy, but these policies were usually created before AI use became a key concern and might not offer sufficient guidance for AI-related situations. TPRM practitioners should collaborate with data governance and privacy teams to develop AI-specific data handling guidelines that link data classification levels to AI use permissions.
A practical implementation might work as follows: publicly available or non-sensitive internal data can be used with any assessed AI tool. Internal use data may only be used with AI tools that have been formally evaluated and operate under enterprise-level terms. Confidential or restricted data, including personal data, intellectual property, and regulated information, can only be used with AI tools that have signed appropriate data processing agreements and meet specific security and compliance standards. Certain categories of data, such as privileged legal communications, material non-public information, and classified information, cannot be used with external AI tools under any circumstances.
This layered approach provides employees and business units with actionable guidance instead of broad restrictions, and it establishes a solid framework for risk acceptance decisions.
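One way to make the tiers operational is to encode them as policy-as-code that downstream systems, such as a DLP rule engine or an approval workflow, can query. The sketch below is a minimal illustration of the mapping described above; the enum names and the idea of ranking vendor statuses are assumptions chosen for illustration, and the actual tiers would come from the organization's own data governance policy.

```python
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1          # publicly available or non-sensitive internal data
    INTERNAL = 2        # internal use data
    CONFIDENTIAL = 3    # personal data, IP, regulated information
    PROHIBITED = 4      # privileged communications, MNPI, classified material

class VendorStatus(Enum):
    UNASSESSED = 0        # discovered via shadow AI; nothing permitted yet
    ASSESSED = 1          # formally evaluated
    ENTERPRISE_TERMS = 2  # assessed and operating under enterprise-level terms
    DPA_IN_PLACE = 3      # enterprise terms plus a signed data processing agreement

# Minimum vendor status required before a data class may be used with a tool.
# PROHIBITED data has no entry because it is never usable with external AI.
REQUIRED_STATUS = {
    DataClass.PUBLIC: VendorStatus.ASSESSED,
    DataClass.INTERNAL: VendorStatus.ENTERPRISE_TERMS,
    DataClass.CONFIDENTIAL: VendorStatus.DPA_IN_PLACE,
}

def ai_use_permitted(data: DataClass, vendor: VendorStatus) -> bool:
    """Return True if the tiered policy allows this data class with this vendor."""
    required = REQUIRED_STATUS.get(data)
    if required is None:  # PROHIBITED: never permitted with external AI tools
        return False
    return vendor.value >= required.value

assert ai_use_permitted(DataClass.PUBLIC, VendorStatus.ASSESSED)
assert not ai_use_permitted(DataClass.PUBLIC, VendorStatus.UNASSESSED)
assert not ai_use_permitted(DataClass.CONFIDENTIAL, VendorStatus.ENTERPRISE_TERMS)
assert not ai_use_permitted(DataClass.PROHIBITED, VendorStatus.DPA_IN_PLACE)
```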
Step Three: AI-Specific Due Diligence Criteria
Assessing an AI vendor requires addressing questions that traditional assessment tools do not cover. TPRM programs need to create or use AI-specific due diligence criteria that address the unique risk features of generative AI systems.
The core questions that AI vendor assessments should address include:
- How does the vendor use input data? Specifically, are prompts, documents, and other user-provided content used to train or improve the model, and if so, can the organization opt out at the enterprise level?
- What are the vendor's data retention practices? How long is input data retained, where is it stored, and what rights does the organization have to request deletion?
- What data residency options are available, and do they meet the organization's regulatory requirements?
- What access controls govern who within the vendor organization can access interaction data, and under what circumstances?
- What is the vendor's approach to security for its model infrastructure, and what certifications or audit reports are available?
- Does the vendor have a responsible AI governance framework, and how are risks such as model bias, hallucination, and prompt injection addressed?
- What is the vendor's breach notification process, and what are its contractual obligations in the event of a security incident involving customer data?
These criteria should be integrated into a formal AI vendor assessment template that can be applied consistently across the AI tool inventory. When vendors do not respond to assessment requests, as is common with consumer-tier AI tools, the assessment should rely on publicly available information, including terms of service, privacy policies, security documentation, and published responsible AI commitments, supplemented by insights from TPRM industry groups and information sharing organizations.
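Hypothetically, such a template could be encoded as structured data so that responses can be scored and compared consistently across the inventory. In the minimal sketch below, the question IDs, weights, and 0-to-5 rating scale are all illustrative assumptions; the questions themselves paraphrase the criteria above.

```python
# Minimal sketch of an AI vendor assessment template as structured data.
# Question IDs, weights, and the rating scale are illustrative assumptions.
ASSESSMENT_TEMPLATE = [
    {"id": "DATA-01", "weight": 3,
     "question": "Are prompts and documents used to train or improve the model, "
                 "and is an enterprise-level opt-out available?"},
    {"id": "DATA-02", "weight": 3,
     "question": "How long is input data retained, where is it stored, and can "
                 "the organization request deletion?"},
    {"id": "DATA-03", "weight": 2,
     "question": "What data residency options are offered?"},
    {"id": "SEC-01", "weight": 2,
     "question": "Who within the vendor can access interaction data, and under "
                 "what circumstances?"},
    {"id": "SEC-02", "weight": 2,
     "question": "What certifications or audit reports cover the model "
                 "infrastructure?"},
    {"id": "GOV-01", "weight": 2,
     "question": "Is there a responsible AI governance framework covering bias, "
                 "hallucination, and prompt injection?"},
    {"id": "IR-01", "weight": 3,
     "question": "What are the breach notification process and contractual "
                 "obligations for incidents involving customer data?"},
]

def weighted_score(responses: dict[str, int]) -> float:
    """Score a completed assessment; responses map question ID -> 0..5 rating.

    Unanswered questions score zero, which penalizes vendor non-response.
    """
    total_weight = sum(q["weight"] for q in ASSESSMENT_TEMPLATE)
    earned = sum(q["weight"] * responses.get(q["id"], 0)
                 for q in ASSESSMENT_TEMPLATE)
    return earned / (5 * total_weight)  # normalized to 0.0..1.0

# Example: a vendor with strong data handling but no published audit reports.
example = {"DATA-01": 5, "DATA-02": 4, "DATA-03": 4,
           "SEC-01": 3, "SEC-02": 0, "GOV-01": 3, "IR-01": 4}
print(f"Normalized score: {weighted_score(example):.0%}")  # ~69%
```

Treating unanswered questions as zero-scoring, as this sketch does, is one way to handle the consumer-tier vendors that ignore assessment requests: silence lowers the score rather than leaving the assessment blank.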
Step Four: Contractual Requirements for AI Vendors
For AI vendors with whom an enterprise relationship is possible, typically vendors that offer enterprise tiers with negotiable terms, TPRM practitioners should establish a baseline set of contractual requirements that reflect the specific risk profile of AI data processing.
At a minimum, enterprise AI vendor agreements should cover:
- Data processing obligations aligned with applicable privacy laws, including clear commitments on the purposes for which input data may be used and prohibitions on using data for model training without explicit consent.
- Data retention and deletion requirements, including the right to request deletion of organizational data upon contract termination.
- Data residency commitments consistent with the organization's regulatory requirements.
- Security obligations, such as maintaining appropriate certifications, the right to audit or review audit reports, and breach notification timelines.
- Confidentiality obligations that extend to organizational data shared through the AI tool.
- Indemnification provisions addressing liability arising from the vendor's data handling practices.
When vendors are unwilling or unable to provide enterprise-level contractual terms, the organization must explicitly decide on the risks, document this decision, and get approval at an appropriate authority level about whether the types of data employees will use with that tool are suitable given the contractual protections available.
Step Five: Continuous Monitoring
The pace of change in the AI vendor ecosystem makes point-in-time assessment fundamentally insufficient as a risk management approach. TPRM programs must implement continuous monitoring of AI vendor relationships that provides ongoing visibility into material changes in vendor risk profiles.
Effective continuous monitoring for AI vendors should include:
- Automated alerts on changes to vendor terms of service, privacy policies, and acceptable use policies, with a defined process for reviewing and responding to material changes (a minimal sketch of this follows the list).
- Monitoring of vendor security incidents, regulatory actions, and significant organizational events such as acquisitions or leadership changes.
- Periodic review, at least semi-annually, of the AI tool inventory to identify new tools in use and changes in usage patterns.
- Engagement with TPRM and cybersecurity information-sharing communities to stay current on emerging risks and incidents associated with AI platforms.
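On the first item, a lightweight starting point is to periodically hash each vendor's published policy pages and flag any change for human review. In the minimal sketch below, the URLs, state file name, and alerting behavior are illustrative assumptions; a production version would need to handle dynamic page content, archive prior versions, and produce a readable diff for the review queue.

```python
import hashlib
import json
import urllib.request
from pathlib import Path

# Placeholder URLs -- substitute the actual policy pages you track.
WATCHED_PAGES = {
    "example-ai-vendor-tos": "https://vendor.example.com/terms",
    "example-ai-vendor-privacy": "https://vendor.example.com/privacy",
}
STATE_FILE = Path("policy_hashes.json")

def fetch_hash(url: str) -> str:
    """Download a page and return the SHA-256 hash of its body."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_changes() -> list[str]:
    """Compare current page hashes to the last run; return names that changed."""
    previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    current, changed = {}, []
    for name, url in WATCHED_PAGES.items():
        current[name] = fetch_hash(url)
        if name in previous and previous[name] != current[name]:
            changed.append(name)  # route to the TPRM review queue
    STATE_FILE.write_text(json.dumps(current, indent=2))
    return changed

if __name__ == "__main__":
    for name in check_for_changes():
        print(f"ALERT: {name} changed since the last check -- review required.")
```

Run on a daily schedule, a script like this turns "the vendor quietly rewrote its privacy policy" from a surprise discovered at the annual review into an alert handled within a day.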
Several technology solutions are emerging to support AI-specific vendor risk monitoring, and TPRM practitioners should evaluate these tools as they would any risk management technology investment, paying close attention to their coverage of the AI vendor ecosystem, the currency and reliability of their data, and their integration with broader TPRM workflows.
Step Six: Employee Governance
Technical controls and vendor assessments are necessary but not enough. The biggest source of AI data exposure risk is employee behavior—the daily decisions employees make about what information to share with AI tools during their work. Managing that behavior calls for clear policies, effective training, and accountability measures suited to the organization's risk culture.
AI acceptable use policies should go beyond basic principles to provide specific, actionable guidance on what data can and cannot be used with AI tools, how AI outputs should be verified before being used in important decisions, and what reporting requirements apply when an employee becomes aware of potential AI-related data exposure. These policies should be integrated with the organization's broader information security and data governance frameworks, rather than being treated as standalone documents.
Training should be practical and role-specific. The risks associated with AI use vary significantly across different functions. For example, the risks faced by a legal professional are different from those faced by a human resources manager, which are also different from those faced by a software developer. Training that addresses these specific differences will be more effective than generic awareness messages.
Accountability mechanisms should include monitoring AI tool use where technically feasible and legally allowed, incorporating AI responsible use expectations into performance management frameworks, and establishing a clear, proportionate, and consistent process for responding to policy violations that is well communicated.
Integration with the Broader TPRM Program
AI vendor risk does not exist in isolation from an organization's broader third-party risk program. The framework elements described above should be integrated with, not maintained separately from, the organization's existing TPRM infrastructure. AI vendors should be tiered using the same risk-based methodology applied to conventional vendors, with tiering decisions informed by the sensitivity of the data processed and the criticality of the use case. AI-specific assessment criteria should be incorporated into the standard assessment workflow rather than maintained as a parallel process. AI vendor risk reporting should be integrated into the standard TPRM dashboard and executive reporting frameworks.
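To illustrate how AI vendors can flow through the same tiering logic as conventional vendors, the sketch below derives a review tier from the two inputs named above: data sensitivity and use-case criticality. The rating scale and tier cutoffs are assumptions chosen for illustration, not a recommended calibration.

```python
# Minimal sketch: tier an AI vendor with the same risk-based inputs used for
# conventional vendors. The 1-4 scales and cutoffs are illustrative assumptions.

def vendor_tier(data_sensitivity: int, use_case_criticality: int) -> str:
    """Both inputs are rated 1 (low) to 4 (high).

    Returns the review tier, which drives assessment depth and review frequency.
    """
    score = max(data_sensitivity, use_case_criticality)  # worst case dominates
    if score >= 4:
        return "Tier 1: full assessment, semi-annual review"
    if score == 3:
        return "Tier 2: full assessment, annual review"
    if score == 2:
        return "Tier 3: lightweight assessment, annual review"
    return "Tier 4: inventory entry only"

# A chatbot summarizing confidential contracts: sensitivity 4, criticality 2.
print(vendor_tier(4, 2))  # -> Tier 1
```

Taking the worst case of the two inputs reflects a common TPRM convention: a single high-risk factor, such as confidential data exposure, is enough to warrant the most rigorous oversight regardless of how routine the use case appears.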
The goal is to develop a TPRM program that treats AI vendors as what they are—third parties with a unique risk profile that requires tailored assessment and monitoring approaches—rather than as a separate category needing its own program. Integration is not only more efficient; it is more effective because it ensures that AI vendor risks are evaluated within the broader third-party risk landscape of the organization and managed with the same rigor as other major vendor relationships.
The Regulatory Horizon: What's Coming and What It Means for TPRM
The regulatory environment surrounding AI data management is evolving quickly, and this development has significant consequences for TPRM practitioners and the organizations they support. Although it's hard to predict specific regulatory outcomes with certainty, several trends are clear enough to guide current governance choices.
The EU AI Act
The European Union's AI Act, which came into effect in 2024 and is being phased in over the following years, establishes a risk-based regulatory framework for AI systems that has direct impacts on enterprise AI governance and, consequently, on TPRM. The Act imposes specific obligations on providers and deployers of AI systems classified as high-risk (a category that includes AI used in employment decisions, credit assessment, access to essential services, and other consequential contexts), including requirements for risk management systems, data governance, human oversight, and transparency.
For TPRM practitioners, the AI Act's importance lies in several key areas. First, it explicitly sets due diligence requirements for organizations deploying high-risk AI systems, including obligations to review technical documentation and conformity assessments of AI system providers, which directly align with the AI vendor assessment framework described earlier. Second, it establishes data governance standards for AI systems that interact with the existing GDPR data processing framework, requiring organizations using AI to process personal data to carefully analyze these regulations. Third, it offers a model for AI regulation that will likely influence other jurisdictions, meaning organizations that develop compliance capabilities for the EU AI Act will be better prepared to adapt as similar frameworks are introduced elsewhere.
US Federal and State Developments
The United States federal legislative landscape for AI regulation remains less settled than the European framework, with no comprehensive federal AI law enacted as of this writing. However, several regulatory actions at both the federal and state levels have significant implications for enterprise AI governance.
At the federal level, sector-specific regulators have been active in extending existing frameworks to address AI-related risks. Financial regulators have issued guidance on AI use in credit decisions, fraud detection, and customer service. Healthcare regulators have clarified how HIPAA applies to AI-related data processing. The Federal Trade Commission has indicated that existing consumer protection authorities also cover AI systems that engage in deceptive or unfair practices. The Securities and Exchange Commission has, through its cybersecurity disclosure rules, set expectations that material AI-related risks, including data privacy risks from AI use, will be disclosed to investors.
At the state level, more jurisdictions are enacting or considering AI-specific laws focused on transparency, bias evaluation, and data management. Colorado, Illinois, and Texas have passed laws related to AI in employment decisions. Several states have implemented or proposed legislation specifically targeting AI use in insurance underwriting. California continues to develop comprehensive AI governance standards. The patchwork of state rules adds compliance challenges for organizations operating in multiple areas and highlights the importance of creating AI governance programs that are strong and flexible, rather than narrowly tailored to any single regulation.
The Emerging Contractual and Assessment Standards Landscape
Beyond formal regulations, the AI vendor risk field is witnessing the rise of voluntary standards and assessment frameworks likely to influence practices in the coming years. The International Organization for Standardization's ISO 42001 standard for AI management systems offers a structured framework for AI governance that vendors can use to showcase the quality of their governance practices and that organizations can reference when evaluating vendors. The Cloud Security Alliance has created AI-specific security guidance that complements its existing cloud security frameworks. NIST's AI Risk Management Framework provides a comprehensive vocabulary and approach for AI risk governance that is increasingly cited in both voluntary guidance and regulatory standards.
For TPRM practitioners, the practical implication of these developments is that AI-specific certifications, audit reports, and conformity assessments are beginning to emerge as credible evidence of AI vendor governance quality, similar to the role that SOC 2 reports and ISO 27001 certifications serve in traditional vendor assessments. Gaining familiarity with these emerging standards helps TPRM practitioners evaluate AI vendor compliance claims more effectively and establish credible requirements in vendor contracts.
What Regulators Are Beginning to Expect
Across jurisdictions and regulatory frameworks, a common theme is emerging: regulators are starting to hold organizations accountable for the AI systems they use, not just the ones they develop. This principle—that deploying an AI system creates obligations similar to those from using any third-party service—has direct implications for TPRM. It means that the "we didn't build it, so we're not responsible for it" stance that some organizations have taken regarding AI data handling is unlikely to hold up as a regulatory defense.
TPRM practitioners who develop strong AI vendor risk programs now will be in a better position not only to handle the current risk environment but also to show regulators, auditors, and senior leaders that the organization has adopted a systematic and justifiable approach to a risk that regulators are increasingly viewing as critical.
The Third-Party Risk Profession's Defining Moment
Third-party risk management as a formal discipline has faced several key turning points in its relatively brief history. The outsourcing boom of the 1990s and early 2000s set the stage for widespread third-party risk, as organizations delegated major operational tasks to external providers without always grasping the associated risks. The rise of cloud computing in the following decade introduced a new type of third-party risk, as data and workloads moved to infrastructure outside of organizations' direct control. The rapid growth of software-as-a-service led to the shadow IT challenge that TPRM programs have spent years learning to address.
At each of these inflection points, the TPRM profession adapted. New assessment frameworks were created. New contractual standards appeared. Regulatory guidance evolved. The discipline matured. The organizations that navigated these transitions most successfully were those that viewed each inflection point not as a threat to existing processes, but as a chance to improve the rigor and sophistication of their risk management approach.
Generative AI marks the next major turning point, and it is a significant one. The level of AI adoption and the speed at which AI tools have become integrated into enterprise workflows across industries and functions surpass any previous technology wave. The variety of risks it introduces—involving data privacy, intellectual property, regulatory compliance, contractual obligations, and emerging technical threat vectors—is broader than most challenges TPRM programs have faced before. Additionally, the pace of change in the AI ecosystem, including how quickly vendor practices, model capabilities, regulatory requirements, and threat landscapes are evolving, is faster than the discipline has ever had to manage.
But the essential principles of effective third-party risk management remain the same. Know your vendors. Understand what data they access and how they manage it. Evaluate their controls and governance practices. Set up proper contractual protections. Continuously monitor their risk profile. Manage employee and business unit behavior through clear policies and effective training. Incorporate AI vendor risk into the broader risk management framework instead of treating it as a separate issue outside normal governance processes.
Practitioners who develop AI vendor risk frameworks now, who treat every AI tool as a third party unless there's solid reason to think otherwise, who create the assessment criteria, contractual standards, and monitoring approaches that the field currently lacks, and who advocate within their organizations that AI vendor risk is a TPRM issue rather than just a technical or compliance matter, will shape the standards and practices adopted by the broader profession. They will be the founders of the frameworks that others will adopt and expand upon.
That is a meaningful opportunity. The TPRM profession has earned its place at the enterprise risk management table by demonstrating, repeatedly and across evolving technology landscapes, that it can identify and manage risks that other functions overlook or underestimate. AI vendor risk is precisely the type of risk that TPRM is best suited to address—visible in the third-party relationship, receptive to assessment and contractual management, and significant enough to deserve the ongoing organizational focus that only a formal risk management program can provide.
The invisible third party is actually quite visible to those who know where to look. The question is whether TPRM programs will choose to look and act before the risks they can already see turn into the incidents they will later be asked to explain.
About the Author
Norman J. Levine, CISA, CDPSE, is the Founder and Principal Consultant of Cyber Risk Partners LLC, with more than 20 years of experience in third-party risk management and cybersecurity at Fortune 500 organizations. He serves on cybersecurity advisory boards at Pace University's Seidenberg School of Computer Science and Information Systems and Seton Hall University's Stillman School of Business. He is the author of The Future of Third-Party Risk Management and Data Privacy, forthcoming from Taylor & Francis in 2026.

