Key Takeaways:
- The Consumer Financial Protection Bureau (“CFPB”) and the Office of the Comptroller of the Currency (“OCC”) have approved the Quality Control Standards for Automated Valuation Models (the “Rule”), which will require mortgage originators and secondary market issuers to ensure that algorithms used for real estate valuation, including artificial intelligence (“AI”) systems (collectively, “automated valuation models” or “AVMs”), are subject to five quality control standards designed to ensure accuracy, protect against the manipulation of data, avoid conflicts of interest, require random sample testing and reviews, and comply with applicable nondiscrimination laws.
- In this Data Blog post, we assess the Rule and offer guidance on how covered entities and financial institutions more generally can take steps toward reducing bias risks related to their use of AI tools.
Over the last week, the Consumer Financial Protection Bureau (“CFPB”) and the Office of the Comptroller of the Currency (“OCC”) approved the Quality Control Standards for Automated Valuation Models (the “Rule”), which will require mortgage originators and secondary market issuers to ensure that algorithms used for real estate valuation, including artificial intelligence (“AI”) systems (collectively, “automated valuation models” or “AVMs”), are subject to five quality control standards. These standards are designed to ensure accuracy, protect against the manipulation of data, avoid conflicts of interest, require random sample testing and reviews, and comply with applicable nondiscrimination laws. The Rule will go into effect one year after all agencies have provided final approval.
The Rule follows more than a decade of federal agency efforts to regulate AVMs since the passage of the Dodd–Frank Act. Four of the five quality control factors included in the Rule are consistent with Section 1125 of the Financial Institutions Reform, Recovery, and Enforcement Act. The final factor, compliance with applicable nondiscrimination laws, creates a new, independent obligation for covered entities to establish policies, practices, procedures, and control systems specifically designed to ensure compliance with such laws.
The Rule’s new factor reflects the Biden–Harris Administration’s focus on addressing discrimination and bias risks in AI policy. President Biden’s Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence directed the Director of the Federal Housing Finance Agency (“FHFA”) and the Director of the CFPB to “consider using their authorities . . . to require respective regulated entities” to use appropriate methodologies to evaluate automated appraisal processes “in ways that minimize bias.” In its announcement, the CFPB emphasized that the new Rule is an “example of the CFPB’s work to use existing laws on the books to police potential pitfalls when it comes to artificial intelligence.”
In this Data Blog post, we assess the Rule and offer guidance on how covered entities and financial institutions more generally can take steps toward reducing bias risks related to their use of AI tools.
Background
The Rule was promulgated under Section 1125 of the Financial Institutions Reform, Recovery, and Enforcement Act (as enacted by Section 1473(q) of the Dodd–Frank Act), which directed agencies to develop quality control standards for AVMs that: “(1) ensure a high level of confidence in the estimate produced by automated valuation models; (2) protect against the manipulation of data; (3) seek to avoid conflicts of interest; (4) require random sample testing and reviews; and (5) account for any other such factor that the agencies . . . determine to be appropriate.”
Prior to the introduction of the Rule, the OCC, the Board of Governors of the Federal Reserve System (the “Fed”), the Federal Deposit Insurance Corporation (“FDIC”), the National Credit Union Administration (“NCUA”), the CFPB, and the FHFA each provided separate guidance on the use of AVMs.
Application
The Rule governs the use of AVMs by mortgage originators and secondary market issuers who “engage in credit decisions or covered securitization determinations” directly or through a third party. The Rule defines an AVM as “any computerized model used by mortgage originators and secondary market issuers to determine the value of a consumer’s principal dwelling collateralizing a mortgage.” Any covered entity utilizing an AVM must adopt and maintain policies, practices, procedures, and control systems to ensure that the AVM adheres to quality control standards designed to:
- Ensure a high level of confidence in the estimates produced;
- Protect against the manipulation of data;
- Seek to avoid conflicts of interest;
- Require random sample testing and reviews (a simplified back-testing sketch follows this list); and
- Comply with applicable nondiscrimination laws.
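By way of illustration, the fourth factor, random sample testing and reviews, is commonly operationalized by back-testing a random sample of an AVM’s estimates against benchmark values, such as subsequent arm’s-length sale prices or independent appraisals. The following is a minimal sketch of such a test, not a requirement of the Rule; the record format, sample size, and 10% tolerance are our illustrative assumptions.

```python
import random
import statistics

# Hypothetical record format: (avm_estimate, benchmark_value), where the
# benchmark might be a later arm's-length sale price or an independent
# appraisal. Field names, sample size, and tolerance are illustrative only.
def random_sample_test(valuations, sample_size=500, tolerance=0.10, seed=42):
    rng = random.Random(seed)  # a fixed seed keeps the review reproducible and documentable
    sample = rng.sample(valuations, min(sample_size, len(valuations)))

    # Signed percentage error of each AVM estimate against its benchmark.
    errors = [(est - actual) / actual for est, actual in sample]

    return {
        "sample_size": len(sample),
        "median_error": statistics.median(errors),
        # Share of estimates within +/- tolerance of the benchmark, a common
        # AVM accuracy metric (sometimes called "PPE10" at a 10% tolerance).
        "within_tolerance": sum(abs(e) <= tolerance for e in errors) / len(errors),
    }

# Example usage with synthetic data:
valuations = [(305_000, 300_000), (190_000, 210_000), (505_000, 500_000)] * 200
print(random_sample_test(valuations))
```

In practice, a covered entity would also document the sampling methodology and results, a point we return to in the compliance takeaways below.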
The Rule also applies to a covered entity’s use of third-party AVMs. However, the Rule does not apply to the use of AVMs in the following scenarios:
- Monitoring the quality or performance of mortgages or mortgage-backed securities;
- Reviewing the quality of already-completed determinations of the value of collateral; and
- Developing an appraisal by a certified or licensed appraiser.
A key feature of the Rule’s design is “flexibility,” achieved through broad, non-prescriptive requirements. To accommodate differing compliance needs across institutions of varying sizes, business models, and risk profiles, the promulgating agencies declined to issue additional guidance beyond the general requirements laid out above. Instead, in the Rule Release, the agencies encouraged institutions developing AVM quality control policies to “review and consider existing guidance” and to “refine their implementation of the rule” so that it can “evolve along with AVM technology.”
Compliance Takeaways
In the one-year lead-up to the Rule going into effect, covered entities should ensure that their model and AI governance frameworks are appropriately designed for compliance. To ensure compliance with the new nondiscrimination quality control factor in particular, covered entities using AVMs should consider taking the following steps:
- Ensure Model and AI Governance Programs Are Appropriately Designed to Address Risk. The Rule Release acknowledged that covered entities should implement the Rule using measures that appropriately reflect the risks and complexities of the entity’s business and use of AVM technology. Model and AI governance programs, policies, procedures, and practices should be right-sized in accordance with associated risk. Covered entities should also periodically reassess their governance programs to ensure they adequately address any new risks that arise as AVM technology rapidly evolves.
- Prepare for Scrutiny of Quality Control Standards and Document Compliance. Based on this risk assessment, covered entities should ensure their quality control standards are designed for compliance. Existing guidance from the OCC, the Fed, the FDIC, the NCUA, the CFPB, and the FHFA provides suggestions for developing policies, practices, procedures, and control systems designed to ensure the accuracy, reliability, and independence of AVMs and the data involved. Covered entities should review this guidance, consider whether any updates to their quality control standards or testing processes are required, and document any updates and sample testing.
- Implement Third-Party Risk Management. The Rule requires the regulated entity to ensure that AVMs used in its valuations are subject to appropriate quality control standards, even if the AVM is developed or operated by a vendor. In addition to existing guidance from federal financial regulators on third-party risk management, covered entities might consider whether additional risk management measures are appropriate, including the following:
- Requiring the vendor to have a written policy on model or AI risk management that applies to the AVM and addresses bias risks; and
- Requiring the vendor to provide relevant employees with training on detecting and preventing bias in the design and operation of the AVM.
- Consider Whether to Leverage AVM Vendors for Bias Assessments. Although the Rule places the burden of compliance on covered entities using AVMs, the vendor providing such AVMs may be best positioned to conduct a bias assessment of the tool for potential noncompliance with nondiscrimination laws. Covered entities should determine whether, and to what extent, to rely on the AVM vendor’s assessment, or whether an in-house or other third-party assessment is appropriate. The Rule does not prescribe a specific method for bias assessment. Rather, the Rule Release acknowledged that “an array of tests and reviews” could support the nondiscrimination requirement. At a minimum, these assessments should consider:
- Sample bias, due to the use of non-representative data to train the AVM, which may occur through data collection, data selection, or pre-processing;
- Design bias, due to integration of human biases through assumptions made in the development, implementation, operation, and maintenance of the AVM; and
- Proxy bias, due to the AVM relying on proxies for protected classes to make valuation decisions or create other outputs.
- Determine What Other Bias Mitigation Steps Should Be Adopted. In addition to the assessment of AVMs, companies might consider whether additional risk mitigation measures are required. These may include the following:
- Model retraining and testing;
- Human oversight (e.g., human review of inputs or outputs);
- Bias training for employees and vendors who develop, approve, use, monitor, assess, or exercise human oversight over an AVM; and
- Ongoing monitoring and bias assessments (a simplified disparity check is sketched below).
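To make the bias assessment and ongoing monitoring points concrete, the sketch below illustrates one test from the “array of tests and reviews” the Rule Release contemplates: comparing median AVM valuation errors across groups (for example, neighborhoods segmented by demographic composition). This is a minimal sketch under our own assumptions; the grouping variable, field names, and disparity threshold are hypothetical, the Rule prescribes no particular test, and a production assessment would be designed with compliance and legal input.

```python
from collections import defaultdict
import statistics

# Hypothetical records: (group_label, avm_estimate, benchmark_value). The
# grouping (e.g., by neighborhood demographics) and the flag threshold are
# illustrative assumptions, not requirements of the Rule.
def error_disparity(records, flag_threshold=0.05):
    errors_by_group = defaultdict(list)
    for group, est, actual in records:
        errors_by_group[group].append((est - actual) / actual)

    # Median signed valuation error per group; a persistent gap between
    # groups may warrant further review (e.g., for proxy bias in inputs).
    medians = {g: statistics.median(errs) for g, errs in errors_by_group.items()}
    gap = max(medians.values()) - min(medians.values())
    return {"median_error_by_group": medians, "gap": gap, "flag": gap > flag_threshold}

# Example usage with synthetic data:
records = [
    ("group_a", 310_000, 300_000), ("group_a", 295_000, 300_000),
    ("group_b", 262_000, 290_000), ("group_b", 255_000, 285_000),
]
print(error_disparity(records))
```

A flagged gap does not itself establish noncompliance, but it can surface potential sample, design, or proxy bias for further review, and a test like this can be rerun periodically as part of ongoing monitoring.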
As the pace of AI adoption increases and federal financial regulators begin to elaborate their policy positions on AI, financial institutions should closely monitor these developments and corresponding industry best practices in anticipation of increased regulatory oversight, particularly with respect to discrimination and bias.
The authors would like to thank Debevoise Summer Law Clerk Henry Maguire for his work on this Debevoise Data Blog.
This publication is for general information purposes only. It is not intended to provide, nor is it to be used as, a substitute for legal advice. In some jurisdictions it may be considered attorney advertising.