Three steps lenders should take now to stay ahead of AI legislation

Mortgage lenders don’t have the luxury of waiting for AI regulations to settle. While states and Washington argue over who sets the rules, lenders remain fully responsible for how artificial intelligence is used in underwriting, servicing, marketing and fraud detection. The question is no longer whether AI will be regulated; it is whether lenders will be ready when the rules land.
Here are three steps lenders should take now to protect themselves, scale responsibly and avoid becoming the subject of regulatory scrutiny.
1. Build real AI governance, not just a policy document
AI risk management can’t live in a slide deck. Lenders need a formal governance framework that covers every AI-driven service they use, documents how the models are trained and defines who is accountable for the results.
That includes understanding data sources, monitoring drift and bias, and establishing escalation paths when AI outputs affect borrower eligibility, pricing or exposure. Regulators have signaled that “we relied on the vendor” will not be an acceptable defense. If AI affects a consumer outcome, the lender owns the risk.
Equally important, governance should be practical, not theoretical. Compliance, legal, IT and business leaders need shared, real-time visibility into where AI is being used, how decisions are being made and how exceptions are being handled. If governance is disconnected from day-to-day workflows, problems surface only after harm has occurred, which is precisely when regulators and plaintiffs’ attorneys start paying attention.
2. Rework vendor oversight before regulators do it for you
Many existing vendor contracts were not written with AI in mind. Lenders should strengthen agreements now to address data ownership, training data, audit rights, bias testing, disclosures and data classification.
State laws already require lenders to explain automated decisions and document risk analyses, even when the AI is provided by third parties. If vendors can’t deliver that transparency or those assessments, lenders will be exposed. Vendor oversight is quickly becoming a core compliance function, not just a procurement function.
This also changes how lenders should evaluate technology partners going forward. AI readiness is a question of governance maturity. Vendors who can’t demonstrate responsible model development, ongoing monitoring and regulator-ready documentation will hold lenders back, not move them forward. In a shifting regulatory environment, the wrong vendor can become a compliance liability overnight.
3. Scale AI deliberately, not everywhere at once
AI doesn’t have to be all-or-nothing. The smartest lenders start with low-risk use cases, such as document classification, workflow automation and fraud detection, while retaining human oversight for high-impact decisions.
This phased approach allows lenders to demonstrate responsible use, collect performance data and refine controls before extending AI deeper into credit decisioning and eligibility. Automation reduces effort, but it does not reduce accountability.
It also creates the evidence trail regulators expect to see. By deploying AI incrementally, lenders can document performance benchmarks, error rates, override patterns and fairness assessments over time. That data becomes critical when auditors ask not just what the AI does, but why it is used, how it is monitored and when people intervene.
Lenders who treat AI adoption as a controlled process rather than a blanket rollout will be better positioned to defend their results as usage grows.
Why AI in mortgage lending carries higher stakes
AI runs on data, and in mortgage lending that data is personal, sensitive and heavily regulated. Compliance regimes such as RESPA, TILA and TRID demand accuracy, clarity and strict deadlines. Introducing AI into this workflow without governance does not eliminate risk; it increases it. Small data errors can quickly become compliance violations at scale.
That reality is driving increased regulatory scrutiny of automated decision-making, particularly around fair lending, transparency and consumer impact. Opaque models are no longer acceptable, and “black box” explanations will not survive examination.
A different rulebook, for now
In the absence of federal legislation, states have gone first. California has expanded its privacy statute to cover automated decision-making. Colorado enacted the nation’s first comprehensive state AI law targeting “high-risk” systems, including creditworthiness tools. Other states are following suit, creating a patchwork of obligations that is difficult for national lenders to manage.
That fragmentation may not last. In December 2025, President Trump signed an executive order directing the federal government to establish a unified national AI framework and to challenge state laws deemed to hinder innovation. Legal battles are likely, but the direction is clear: federal standards are coming.
Compliance becomes a test of trust
AI regulation is entering a turbulent phase. States are asserting authority. Washington is pushing back. The courts will draw the boundaries. Through it all, lenders remain responsible for the outcomes.
In the age of AI, compliance is no longer just about meeting technical requirements. It is about earning trust with regulators, investors and borrowers. Lenders who act now, governing deliberately and scaling responsibly, will do more than stay ahead of the rules. They will help define what compliant AI in mortgage lending looks like next.
Geoffrey Litchney is managing counsel and director of compliance at Dark Matter Technologies. An expert in federal and state lending laws, Litchney focuses on transforming legal, regulatory and privacy requirements into practical, business-friendly solutions that drive innovation. Litchney can be reached at [email protected].
This column does not necessarily reflect the opinion of HousingWire’s editorial department and its owners. To contact the editor responsible for this piece: [email protected].



