
Three Reasons Why the Mortgage Process Is Ripe for Change

by Guest Contributor 4 min read November 10, 2020

No one can deny that the mortgage and real estate industries have been uniquely affected by COVID-19. Social distancing mandates have hindered open house formats and schedules. Meanwhile, historically low interest rates, pent-up demand and low housing inventory created a frenzied sellers’ market with multiple offers, usually over asking. Added to this is increased scrutiny of how much borrowers can qualify and be approved for under tightened investor guidelines, along with the need to verify continued employment to ensure a buyer maintains qualifying status through closing.

As someone who’s spent more than 15 years in the industry and worked on all sides of the transaction (as a realtor and for direct lenders), I’ve lived through the efforts to revamp and digitize the process. It wasn’t until recently, however, that I purchased my first home and experienced the mortgage process as a consumer. What became clear is that, for most lenders, the pandemic has only shined a light on a still somewhat fragmented mortgage process and a clunky consumer experience.

Here are three key components missing from a truly modernized mortgage experience:

Operational efficiency
Knowing that the industry had made moves toward a digital mortgage process, I hoped for a more streamlined and seamless flow of documents, loan deliverables and communication with the lender. However, the process I experienced was more manual than expected and disjointed at times.

Looking at a purchase transaction from end to end, there are at least nine parties involved: the buyer, the seller, the buyer’s and seller’s realtors, the lender, home inspectors/inspection vendors, the appraiser, the escrow company and the notary. With all those touchpoints in play, it takes a concerted effort between all parties, and no unforeseen issues, for a loan to be originated in under 30 days. Meanwhile, the opposite has been happening: the average time to close a loan has increased to 49 days since the beginning of the pandemic, per Ellie Mae’s Origination Insights Report. Faster access to fresher data can reduce the time it takes to originate a mortgage. That saves resource hours for the lender, and those savings can ultimately be passed on to the borrower.

Digital adoption
Parts of the mortgage process have been digitized, yes. But there are still gaps in digital connectivity that keep it from being a true end-to-end digital process. The borrower is still required to track down various documents from different sources, and the paperwork still feels very “manual.” Printing, signing and scanning documents back to the lender for underwriting adds to that manual feel. Unless the borrower happens to keep every document digitally organized, requirements like producing W-2s and paystubs, and continuously providing bank and brokerage statements to the lender, make for an awkward process.

Modernizing the mortgage process end to end with the right kind of data and technology reduces the number of manual steps and translates into lower costs to produce a mortgage. Right now, turn times are being pushed out when the opposite could be happening. A streamlined, modernized approach between lender and consumer not only saves time and money for both parties; it ultimately enables the lender to add value by providing a better consumer experience.

Transparency
Digital adoption and a better end-to-end digital process are not the only keys to a better consumer experience; transparency is another integral part of modernizing the mortgage process. Greater transparency for the borrower starts with a true understanding of how much they can qualify for (a simple sketch of that math follows below). It also means that once the loan is in underwriting, the borrower should have a clearer view of the loan’s status and be able to anticipate and proactively address loan conditions.
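To make that first piece concrete, the sketch below shows how a qualifying loan amount falls out of a debt-to-income (DTI) cap combined with the standard amortization formula. This is a minimal, hypothetical illustration in Python: the function name, the 43% DTI cap and the rate/term defaults are assumptions for the example, not any particular lender’s underwriting model, which also weighs credit, assets and reserves.

```python
def max_qualifying_loan(gross_monthly_income, monthly_debts,
                        annual_rate=0.03, term_years=30, max_dti=0.43):
    """Estimate the largest loan a borrower could qualify for under a
    debt-to-income (DTI) cap. Illustrative only: it ignores taxes,
    insurance, credit and reserve requirements."""
    # Monthly payment budget left under the DTI cap after existing debts.
    max_payment = gross_monthly_income * max_dti - monthly_debts
    if max_payment <= 0:
        return 0.0
    r = annual_rate / 12   # monthly interest rate
    n = term_years * 12    # number of monthly payments
    # Invert the amortization formula M = P*r / (1 - (1+r)**-n) for P.
    return max_payment * (1 - (1 + r) ** -n) / r

# Example: $8,000/month gross income and $500/month existing debts
# leave a $2,940 payment budget, roughly a $697,000 loan at 3% over 30 years.
print(round(max_qualifying_loan(8_000, 500)))
```

The point is not the specific numbers but that the calculation is mechanical: with verified income and debt data in hand, a lender can surface this figure to the borrower at the start of the process rather than late in underwriting.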

Additionally, the lender can benefit from more transparency and visibility into a borrower’s income streams and assets, gaining a more efficient and holistic picture of the borrower’s ability to pay upfront. This allows for a more streamlined process and enables the lender to close efficiently without sacrificing quality underwriting.

A multitude of factors have come into play since the beginning of the pandemic. Social distancing mandates have led to breakdowns in the traditionally face-to-face process of obtaining a mortgage, highlighting areas for improvement. Can it be done faster and more seamlessly? Absolutely.

In ideal situations, mortgage originators can consistently close in 30 days or less. Creating operational efficiencies through faster, fresher data can be the key to a lender more accurately assessing a borrower’s ability to pay upfront. At the same time, a digital-first approach enhances the consumer experience, giving borrowers a frictionless, transparent mortgage process. With technology, better data and the right kind of innovation, there can be a truly end-to-end digital process and a more informed consumer.

Learn more
