Managing AI-Related Risks Associated with Vendors

December 2024

As private equity firms ramp up their AI adoption, one of the most difficult challenges they face is managing third-party risk. In practical terms, a great deal of AI adoption involves engaging third-party vendors to provide AI-enabled products or services. Firms often struggle when deciding what diligence to perform on these vendors and how to mitigate—through contractual conditions or other means—the risks identified in the diligence process.

Third-party diligence issues are especially salient given the SEC’s 2025 Examination Priorities, which discuss the close scrutiny Registered Investment Advisers (RIAs) can expect from the SEC regarding any outsourcing of investment selection or management functions—as well as regarding an RIA’s integration of AI into advisory functions, including portfolio management, trading, marketing and compliance. The SEC’s proposed rules on Outsourcing by Investment Advisers (“Proposed Outsourcing Rules”) and Cybersecurity Risk Management for Investment Advisers, Registered Investment Companies and Business Development Companies (“Proposed Cybersecurity Rules”) also emphasize the importance of vendor risk management for RIAs. Accordingly, as RIAs increasingly rely on AI to assist with their decision-making and investment processes, they must ensure that their risk and diligence procedures account for AI vendors that have the potential to cause a material adverse impact on the adviser’s clients or on the adviser’s ability to provide investment advisory services.

Establishing an AI Vendor Risk Management Program

Different AI vendors will present different levels of risk, depending on the nature of the product and how the product fits into the firm’s workflow. An effective, risk-based third-party AI risk management program therefore centers on efficiently identifying, assessing and mitigating risks associated with each AI vendor. As the SEC’s Proposed Outsourcing Rules observe, “due diligence should be reasonably tailored to the identified service provider and to the functions or services to be outsourced.” Firms constructing an AI vendor risk management program should consider the following:

  1. Determining Program Scope. Vendors can use AI in a variety of ways. A vendor may provide AI models for direct use by the firm, or it may provide software products that incorporate AI-enabled features but do not allow users any control of the underlying models. Other vendors may leverage AI on their own systems to provide goods and services to the firm without the firm having any interaction with those systems. An AI vendor risk management program should define its scope against this range of possibilities.    
  2. Defining AI. Part of determining the program’s scope is defining what is meant by “AI.” Will the program apply only to circumstances involving generative AI, or will it cover a broader range of machine-learning technologies? The firm should also consider whether to cover vendor offerings such as algorithmic models that do not leverage AI or machine learning but that might present similar reputational or regulatory risks.
  3. Integrating with Other Diligence Programs. Many of the risks associated with AI vendors overlap with risks addressed through cybersecurity and data privacy diligence, including maintaining confidentiality, controlling access to data, sharing sensitive data with third parties, and deleting data when it is no longer needed. Firms should consider whether AI and cyber diligence should remain separate (while eliminating redundancies between them) or be integrated as part of a comprehensive technology diligence process. This issue is particularly critical in light of the Proposed Outsourcing Rules’ requirements for vendor diligence, under which firms must conduct diligence on any vendors that perform “covered functions” and identify how to mitigate potential risks posed by such vendors.
  4. Program Standardization. Many AI-related risks—such as those associated with intellectual property, confidentiality, cybersecurity and quality control—will apply to a wide range of AI vendors. But other risks, such as the risk of bias or discrimination, may arise only for certain tools and in certain use cases. Such variation may limit a firm’s ability to standardize components of its AI vendor risk management program, such as diligence questionnaires or model contract provisions. As a result, firms may have to tailor their AI diligence to the risks presented by each category of vendor, so that the onboarding of low-risk vendors is not impeded by an inappropriately high level of scrutiny (which can lead to circumvention of controls or so-called “shadow IT” risks).
  5. Managing the Risks of New Features for Existing Services. Many vendor AI services are part of existing software packages covered by existing contractual agreements. It can be a challenge to determine when the release of new AI features should be treated as a new procurement or engagement that requires renewing or revisiting that vendor’s diligence and risk management analysis. This question is complicated by the fact that vendor updates are not timed to coincide with contract cycles. As a result, even if a firm does choose to revisit the risk analysis for a particular vendor, it can be difficult or even impossible to act on the results of the analysis in the middle of an ongoing engagement with a pre-defined term of service.
  6. Distinguishing Between Tool Risk and Use Case Risk. Some AI tools are built for specific use cases and thus involve specific risks (such as the regulatory compliance of a resume screening tool) that should be addressed in the vendor onboarding process. Other tools are general purpose, and their associated risks depend on the particular use cases that emerge only after the tool has been onboarded. Likewise, some risks (such as risks related to accuracy and reliability) require extended study and use to fully understand and mitigate, making them difficult to address prior to engagement. It is therefore challenging but important to determine which risks are better addressed through the vendor risk management process and which are better mitigated through ongoing AI governance.
  7. Integrating with SEC Compliance. Sponsors also need to consider how their AI vendor risk management aligns with the SEC’s current expectations and potential new regulatory requirements. This includes, for example, identifying which AI vendors may be subject to the Proposed Outsourcing Rules because they provide “covered functions,” and which AI vendors have access to RIA information or systems such that they are subject to the Proposed Cybersecurity Rules. 

Managing Identified AI Vendor Risks

Depending on the answers to the questions above, firms should consider whether their existing third-party risk management structures are sufficient (in terms of resources, expertise, scope and mandate) to assess the risks presented by AI vendors. They also should consider the range of procedural, contractual, technical and other mitigations that may be deployed to lessen the risks identified during their vendor risk management process.

Some steps to consider when identifying effective mitigations include:

  1. Conducting Internal Diligence. As part of performing diligence on the AI vendor itself, consider conducting internal diligence as to why the vendor’s services—and, specifically, their AI-enabled products or services—are necessary. This includes mapping intended use cases, the data to be used, best-case and worst-case scenarios, how success will be defined for the engagement and whether there will be a pilot program.
  2. Itemizing Risks, Diligence and Terms. Consider creating a checklist of potential risks that the firm will contemplate when engaging an AI vendor. For each risk that can be addressed through contract, consider whether it is possible to have a playbook with model diligence questions, ideal contract terms and acceptable fallback terms. Also consider organizing these risks into standard risks (those that will be addressed for all AI vendor engagements) and nonstandard risks (those that will only need to be addressed in specific contexts), and identify which risks are covered by other diligence efforts (cyber, privacy, etc.) as opposed to those addressed only through AI-specific diligence. Finally, consider whether there are any risks (such as regulatory compliance with hiring, lending or biometric laws) that will require review and sign-off from specific subject-matter experts, such as the legal team, compliance staff or HR.
  3. Identifying Noncontractual Mitigations. In certain circumstances, firms may decide to move forward with an AI vendor even if there are identified risks that have not been (or cannot be) fully mitigated via contract or through diligence. For these residual risks, firms should consider whether there are noncontractual measures (including technical or operational measures) that can be implemented at the use-case stage as further mitigants. For example, to minimize the risk associated with allowing an AI vendor to process sensitive data using an AI system, firms may want to consider functional means of preventing such data from being exposed to the vendor in the first place. Or, to address business continuity risks associated with key vendor-supported AI systems, firms may want to consider creating business continuity plans featuring backups or workarounds that will allow them to meet obligations in the event of a vendor disruption.

The Private Equity Report Fall 2024, Vol 24, No 3