Key takeaways:
- Several recent developments provide new insight into the future of artificial intelligence (“AI”) regulation.
- This is the third post in our series on what these developments mean for the future of AI regulation.
- In this post, we discuss the Federal Trade Commission’s (“FTC”) recent guidance on how existing laws apply to AI and the FTC’s advice for how companies using AI can avoid claims of unfair or deceptive practices, including bias and discrimination.
- In an upcoming post, we will provide a list of steps that companies can take to limit the risks of developing AI tools that will be viewed as noncompliant in the future global AI regulatory landscape.
In our first post in this series on the future of AI regulation, we discussed the recent request for information (“RFI”) from U.S. federal banking regulators on the use of AI. Our second post addressed the European Commission’s draft AI legislation. In this third installment, we discuss the Federal Trade Commission’s (“FTC”) recent blog post entitled “Aiming for truth, fairness, and equity in your company’s use of AI,” which was released on April 19, 2021.
The FTC’s Blog Post on Truth, Fairness, and Equity in AI. The FTC’s blog post follows the Commission’s guidance issued in 2020 on “Using Artificial Intelligence and Algorithms,” which we previously discussed on our webcast with Andrew Smith, head of the FTC’s Bureau of Consumer Protection. As Mr. Smith noted, the FTC’s enforcement actions and guidance both emphasize that the use of AI should be transparent, explainable, fair, empirically sound, and accountable. More recently, FTC Commissioner Rebecca Kelly Slaughter remarked that “[i]ncreased accountability means that companies—the same ones who benefit from the advantages and efficiencies of algorithms—must bear the responsibility of (1) conducting regular audits and impact assessments, and (2) facilitating appropriate redress for erroneous or unfair algorithmic decisions.”
The FTC’s new post may be a preview of its approach to AI enforcement in the Biden Administration. In contrast to the EU’s lengthy and comprehensive draft legislative framework, which proposes an array of new AI regulations, the FTC’s two-page document focuses on how existing U.S. laws already prohibit the use of AI that is biased or unfair. According to the FTC, those laws include:
- Section 5 of the FTC Act, which prohibits unfair or deceptive practices and, the FTC notes, extends to the sale or use of racially biased algorithms.
- The Fair Credit Reporting Act, which may come into play when AI is used to deny people employment, housing, credit, insurance, or other benefits.
- The Equal Credit Opportunity Act, as well as its implementing Regulation B, which prohibits the use of a biased algorithm that results in credit discrimination based on protected classes, such as race or sex.
Drawing on past hearings, investigations, and enforcement actions, the FTC offers the following seven lessons on using AI truthfully, fairly, and equitably:
- Use complete and representative data sets to design AI models. If a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups.
- Test algorithms for discriminatory outcomes before using them and periodically thereafter (for one simple illustration of such a test, see the sketch following this list).
- Make your use of AI transparent and available for independent reviews by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.
- Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results.
- Be truthful and upfront about how you use data. A business shouldn’t build its AI model from consumer data unless it was authorized to collect and use that data.
- Use AI models that do more good than harm to consumers. The FTC may challenge a business’s use of an AI model “if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.”
- Take accountability for how your AI models perform. The FTC indicates in the blog post that it will take action against businesses using algorithms that it determines are biased and result in credit discrimination.
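The FTC’s post does not prescribe a particular testing methodology for the second lesson above. As a minimal, purely illustrative sketch, the Python below applies one common screening heuristic, the “four-fifths rule” for disparate impact, to a hypothetical set of model decisions; the group labels, sample data, and 0.8 threshold are our assumptions for illustration, not FTC requirements.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Favorable-outcome rate per group.

    `outcomes` is an iterable of (group, approved) pairs, where
    `approved` is True if the model produced the favorable decision.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.

    Under the common "four-fifths" rule of thumb, a ratio below 0.8 is
    often treated as a signal of potential disparate impact that
    warrants closer review.
    """
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical decisions from a credit model, labeled with a
    # protected-class attribute collected for testing purposes only.
    decisions = (
        [("group_a", True)] * 80 + [("group_a", False)] * 20 +
        [("group_b", True)] * 55 + [("group_b", False)] * 45
    )
    for group, ratio in disparate_impact_ratios(decisions).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

A simple screen like this is a starting point, not a defense: companies typically pair it with more rigorous statistical testing and re-run it periodically as the model and its input data drift.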
Recent Enforcement Actions and the FTC’s Authority to Require Destruction of Models. In discussing the proper use of personal data to train AI models, the FTC references its settlement with the photo app developer Everalbum, Inc., which we discussed in detail in a previous blog post. In its complaint, the FTC alleged that Everalbum represented to its users (i) that they must affirmatively opt in to enable the app’s facial recognition settings, and (ii) that Everalbum deleted users’ photos and videos whenever users deactivated their accounts. Both of these representations, the FTC alleged, were false and deceptive. As part of the settlement, Everalbum was required to delete the data that it had collected and retained without users’ consent. More importantly, the settlement also required the destruction of any facial recognition models or algorithms that Everalbum developed using users’ photos and videos that were collected through deceptive means. As we noted in our previous blog post, this is a very powerful enforcement tool in AI cases.
The FTC’s guidance coincides, however, with the Supreme Court’s recent decision curtailing the FTC’s authority to seek monetary relief. In a unanimous decision issued on April 22, 2021 in AMG Capital Management, LLC v. FTC, the Supreme Court ruled that Section 13(b) of the FTC Act does not grant the FTC the authority to recover restitution or disgorgement of ill-gotten gains in civil enforcement actions. As we discussed in a recent blog post, the ruling’s limitation on monetary relief may affect settlement negotiations and the other forms of relief that the FTC seeks in future enforcement actions. The FTC may, for example, start relying more on other forms of redress, such as financial recovery through the administrative process, injunctive relief through court orders, and settlements requiring destruction of algorithms developed from biased data and of data collected through deceptive means.
Algorithmic Discrimination and Unfairness Laws. Perhaps the most notable part of the FTC’s blog post is its warning to companies that “[i]f your model causes more harm than good – that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.”
Unfairness authority is not unique to the FTC: it also provides a potential avenue for AI enforcement by state attorneys general charged with enforcing their states’ Unfair and Deceptive Acts and Practices (“UDAP”) statutes. Some of these state consumer protection laws include private rights of action, which may present opportunities for private plaintiffs seeking to challenge allegedly discriminatory, deceptive, or unfair uses of AI.
Takeaways. The FTC’s blog post is consistent with the current approach that we’ve seen from U.S. regulators on AI, which is to:
- Gather information from companies through RFIs or regulatory exams on their use of AI and the measures they are implementing to reduce bias and other risks;
- Remind companies that existing laws apply to AI, and that no new regulation is needed for them to bring enforcement actions against companies that use AI that is biased against protected classes or that use data in violation of privacy obligations; and
- Issue guidance on what they view as uses of AI that violate existing laws and bring enforcement actions against those companies that act contrary to that guidance.
In an upcoming installment in this series on the Future of AI Regulation, we will provide a list of steps that companies can take now to limit the risks of developing AI tools that will be viewed as noncompliant with the global AI regulatory landscape that is likely to take shape over the next few years. Those steps will cover overall governance, as well as:
accountability, documentation, regulatory disclosures, appeal rights, escalation of incidents, risk assessments, bias testing, guardrails, training, board reporting, human oversight, transparency, business continuity, AI inventories, explainability, cybersecurity, ongoing monitoring, vendor management, and privacy protection.
Please join Avi Gesser and Anna Gressel for a special edition of our DSS Webcast on Monday, May 3, 2021 at 10:00am ET on the Future of AI Regulation, as well as steps that companies can adopt now to prepare for the rapidly evolving AI regulatory landscape. You can register for the live webcast here, and for an on-demand recording here.
* * *
To subscribe to the Data Blog, please click here. Please do not hesitate to contact us with any questions.