Approximately a year after the release of ChatGPT, the promise and peril of generative AI continue to dominate media headlines, corporate roadmaps, boardroom presentations and regulatory agendas. Yet many businesses are still in the early stages of adopting or testing generative AI, as well as more traditional AI solutions. While a number of early adopters are actively promoting the benefits they have obtained from these technologies, others have reported mixed results and even some regrets.
At the same time, the legal and regulatory landscape for AI continues to evolve quickly and unpredictably worldwide. In recent months, several international, state and local jurisdictions and regulatory bodies have introduced measures aimed at regulating AI. There are no signs that this regulatory scrutiny will abate.
As stewards of investor capital, private equity sponsors face a dual challenge: encouraging and supporting their portfolio companies’ use of these potentially transformative technologies, while ensuring that those companies approach AI initiatives in a responsible, defensible and controlled manner—in other words, in a manner that is most likely to produce value without generating unnecessary or unacceptable risk. These challenges also apply to private equity firms themselves, which are exploring the potential upside of using AI technologies to support their own investment operations while remaining sensitive to the increased demands for risk management. In the Spring 2023 Private Equity Report, we outlined considerations for private equity when managing risks associated with the use of generative AI technologies. As the generative AI risk management programs of private equity firms and their portfolio companies begin to take shape, a key lesson that emerges is the importance of effective AI governance. This article discusses evolving expectations and strategies for effective AI governance and the role of an AI governance program in both supporting responsible value creation and mitigating the risks associated with AI technologies.
Taking a Risk-Based Approach
The approach to AI governance should be risk-based, allowing reasonable flexibility to adapt based on the benefits and risks of the particular use. An effective, risk-based AI governance program allows companies to safely adopt and oversee the use of new AI technologies as they become available. It also allows companies to triage elevated-risk AI technologies already in use and apply appropriate guardrails to manage those risks.
Different uses of AI pose different levels of risk. For example, using AI to schedule and plan meetings poses very different risks from using AI in ways that involve proprietary portfolio company data.
Four principal factors should be evaluated in assessing the risks associated with any particular AI use case: the AI system being used, the purpose for which it is being used, the relevant data accessed by that system and the expected users.
Establishing a Cross-Functional Governance Committee
Because AI risks span a range of substantive areas, private equity firms and any of their portfolio companies that may meaningfully use AI should consider establishing a cross-functional governance committee that either oversees the AI program directly or provides another means of establishing accountability. Committee members may include representatives from the business as well as the legal, information security and data analytics functions.
AI governance committees may be tasked with a range of responsibilities, such as helping set the company’s AI strategy; maintaining the firm’s AI policies and procedures; supporting appropriate training for employees on uses of AI; conducting oversight of the use of AI; and otherwise ensuring that the company’s use of AI is productive, responsible and consistent with applicable legal, regulatory and compliance requirements. These responsibilities are typically documented in a committee charter, which may also detail requirements and expectations for committee work and meetings, as well as reporting channels to senior management and the board.
Creating a Governance Framework
Effective AI governance programs are supported by a framework of policies, protocols and programs, including:
AI Policies. AI policies outline expectations and any prohibitions or limitations on employees’ use of AI. These policies may also address other specific AI risks, including vendor management concerns, ongoing monitoring for quality control, AI-related incident response and data governance.
Training. Individuals involved in developing, monitoring, overseeing, testing or using elevated-risk AI applications should receive appropriate training on the relevant policies and expectations, as well as on the associated risks.
Inventory. To exercise risk-based oversight of AI, companies should consider compiling an inventory of existing and proposed AI use cases that includes sufficient detail about each AI system, its purpose, the relevant data and the expected users to assess the associated risks.
Elevated Risk Factors. Companies should also consider identifying a list of elevated-risk factors that, when presented by a particular AI use case, may trigger enhanced review and assessment processes.
Mitigation Options. Companies may also wish to identify measures that can be implemented as appropriate to reduce the risks associated with certain uses of AI, such as bias assessments, model testing and validation, enhanced transparency and additional human oversight. We previously identified specific mitigation measures for generative AI in the Spring 2023 Private Equity Report.
Documentation. Companies should also consider maintaining documentation about the program, including any reporting to senior management and the board.
The scope and complexity of each of these component policies, protocols and programs may vary depending on the maturity of a company’s AI program.
Reviewing and Assessing AI Use Cases
After establishing a governance framework, the AI governance committee’s next task is typically to review and assess existing AI use cases and to put in place a process for identifying and evaluating new use cases as they emerge. The level of review should be commensurate with the company’s assessment of the potential risks associated with the particular AI use case. Use cases presenting elevated-risk factors, for example, may require enhanced review and assessment processes, including additional consultation with other company employees, outside experts or other individuals as appropriate.
The review should result in a determination that the risks associated with the AI use case being evaluated are either (1) acceptable; (2) acceptable, but only if certain risk mitigation measures are implemented; or (3) not acceptable, in which case the proposed use case is not approved to move forward. The process and outcome of the review should be documented and communicated to all relevant stakeholders for the AI use case.
* * *
AI has the potential to create significant business opportunities for private equity firms and their portfolio companies. Designing and implementing an effective AI governance program can help companies realize those opportunities fully while avoiding the pitfalls that arise when risks associated with AI are not properly considered or addressed.