Artificial intelligence is becoming increasingly central to businesses across the global economy. AI has long been used in specific areas such as risk management models for insurance companies, fraud detection by credit card companies, and customer support chatbots; now, companies in a wide range of sectors are racing to harness AI to enhance their offerings and streamline their operations—everything from generating computer code to developing drugs to pricing real estate.
As companies build their own AI tools, use third-party AI applications, and harness AI-generated content, AI-related assets are becoming a source of both value and risk for private equity sponsors evaluating these companies as investment targets. This article reviews some of the key issues sponsors should consider as they incorporate AI factors into their due diligence.
Due diligence of a target company’s AI begins with determining what types of AI technologies the company uses, for what purposes, and the value of AI to the relevant business—and therefore to its investors. Due diligence strategy is further shaped by whether the target company is developing and using its own proprietary AI model (including whether it is using the model for the benefit of third parties, such as customers), whether the company is relying on or sourcing AI-generated output from a third party, and/or whether the company is providing data for use in training third-party AI models.
Rights and Regulatory Considerations
AI models are built on algorithms trained on large collections of data, called training data, to produce output responsive to prompts provided by a user, which could be a business or its customers. The three parts of the AI workflow—the AI model itself, the training data fed into the model, and the output generated by the model—each bring their own diligence considerations regarding rights and regulations.
- The AI model. If a target company has developed its own AI model, a sponsor should seek clarity as to how the company protects its model from unlicensed use by third parties in order to maintain its value and market advantage. In the United States, copyright, patent, and trade secret laws may provide some protection for proprietary AI tools (e.g., software algorithms and data compilations), but obtaining such protection presents unique challenges. Key considerations include who developed the model and whether the company has appropriate documentation in place with all individuals who contributed to the model’s development (such as IP assignment or work-made-for-hire agreements with employees and contractors), as well as employee handbooks and other written policies addressing confidentiality and authorized use of the company’s proprietary information.
- Training data. A private equity sponsor evaluating a target company with a proprietary AI model should identify the sources of training data and confirm that the target company has all rights necessary to use those datasets in the ways they are used, and are intended to be used, in the company’s model. That inquiry should address intellectual property rights as well as rights in any potentially sensitive data. The sponsor should also evaluate the risk that the target will be unable to acquire rights to additional high-quality training data in the future, which could impact the long-term viability of the target’s AI model.
- AI output. Even if a target has secured the necessary rights in the training data for use with its AI model, rights in the AI output must be considered as well. For example, if a company is licensed to use copyrighted third-party content as training data, the output generated by a model trained on that content is not necessarily covered by the same license. If the AI output includes reproductions or derivatives of copyrighted content, any use of the output not expressly covered by the license might infringe the third party’s rights. A team conducting diligence on the target company’s rights with respect to AI should review the company’s contracts with third parties that provide or have provided training data for the model to confirm that the company has sufficient rights to use and/or own the output and, to the extent applicable, to make that output available to its customers.
In addition to IP issues, companies should also view their training and output data through the lens of existing and evolving data privacy laws that govern the collection, use, and other processing of such data, as well as the mechanisms required to protect it. When companies feed training data into a vendor’s AI model rather than a proprietary one, they should understand how that input data is used by the AI vendor (including ensuring that the vendor does not use the data for purposes beyond providing services to the company) and how it is protected, particularly where necessary to ensure compliance with data privacy or other regulatory obligations.
Finally, companies must remember that the laws, regulations, and standards for AI also include ethical considerations relating to bias, transparency, and accountability. A private equity sponsor should confirm that the target company has a framework for addressing AI matters and associated risks, including a plan to update the company’s AI-related policies and practices as laws and regulations evolve.
Litigation Considerations
Litigation over AI has so far centered primarily on AI training data and output, with courts evaluating a number of lawsuits alleging both that AI training data impermissibly contained copyrighted works and that AI output constituted unauthorized derivative works. Many of the plaintiffs in these cases have been owners of copyrighted works, but plaintiffs whose trademarks appeared in an AI model’s output (such as Getty Images and The New York Times) have brought trademark claims as well. So far, courts have been very skeptical of infringement claims regarding AI output, especially where plaintiffs have been unable to tie that output to specific training data. But plaintiffs have been more successful—at least in preliminary litigation stages—with claims based on copyrighted training data.
Litigation challenging AI models is still in the very early stages, and while it seems unlikely to represent an existential threat to the AI industry, it can nevertheless pose challenges to AI companies and their customers. We expect significant litigation over the legality of AI models and their training data, as well as the scope of fair use defenses, that could take the courts decades to sort out—during which time this technology will continue evolving.
Cybersecurity and Risk Allocation Considerations
Because AI models depend on large training datasets fed into algorithms and software, both third-party and proprietary AI models are susceptible to performance failures, data breaches, and other cyberattacks and incidents. Sponsors should thus review the target company’s internal practices, including its policies and procedures (such as how employees are trained), and its contracts with third parties relating to the security and protection of both the AI tool itself and the relevant input and/or output. Whether the AI platform is proprietary or that of a third party, the company should take appropriate measures to maintain and protect the software, systems, and servers that house the AI model, training data, any user-provided input data, and output, as applicable.
Sponsors should also examine how risk has been allocated between the target company and third parties and whether the target company bears potentially significant risk associated with its use of AI models, input and training data, and output (e.g., if the output or its use infringes a third party’s rights). Many companies now require representations of non-infringement and of rights in sourced data in order to gain greater assurance that a third-party AI model does not infringe third-party IP rights. The relevant contracts should clearly allocate risk through express obligations, representations and warranties, and indemnities addressing the issues discussed above, including system security and performance; ownership of and rights to use AI input and output; non-infringement and other violations of intellectual property and other third-party rights; and compliance with applicable data privacy laws and ethical standards regarding the use of AI.
Conclusion
While the development of AI tools and the use of AI output have the potential to create substantial value for a target company, a private equity sponsor should carefully evaluate the company’s practices with respect to AI to ensure that any potential risk exposure does not undermine the value of the sponsor’s investment.