
Changes in US Vehicles in Operation: Q1 2019

by Guest Contributor 1 min read June 26, 2019

[Infographic: changes in US vehicles in operation, Q1 2019]

Related Posts

Model inventories are rapidly expanding. AI-enabled tools are entering workflows that were once deterministic, and decisioning environments are more interconnected than ever. At the same time, regulatory scrutiny around model risk management continues to intensify. In many institutions, classification determines validation depth, monitoring intensity, and escalation pathways, while informing board reporting. If classification is wrong, every downstream control is misaligned. And in 2026, model classification is no longer just about assigning a tier; it is about understanding data lineage, use-case evolution, interdependencies, and governance accountability in a decentralized, AI-driven environment.

We recently spoke with Mark Longman, Director of Analytics and Regulatory Technology. Here are some of his thoughts on five blind spots risk and compliance leaders should consider addressing now.

1. The “Set It and Forget It” Mentality

The Blind Spot

Model classification frameworks are often designed during a regulatory remediation effort or an inventory modernization initiative. Once documented and approved, they can remain largely unchanged for years. However, model risk management is an ongoing process. “There’s really no sort of one and done when it comes to model risk management,” said Longman.

Why It Matters

Classification is not merely descriptive; it is prescriptive. It drives the depth of validation, the frequency of monitoring, the intensity of governance oversight, and the level of senior management visibility. As Longman notes, data fragmentation is compounding the challenge. “There’s data everywhere – internal, cloud, even shadow IT – and it’s tough to get a clear view into the inputs into the models,” he said. When inputs are unclear, tiering becomes inherently subjective, and if classification frameworks are not reviewed regularly, governance intensity can become misaligned with real exposure.
Therefore, static classification is a growing risk, especially in a world of rapidly expanding AI use cases. In a supervisory environment that continues to scrutinize model definitions, particularly as AI tools proliferate, a dynamic, periodically refreshed classification process can demonstrate institutional vigilance.

2. Assuming Third-Party Models Reduce Governance Accountability

The Blind Spot

There is often an implicit belief that vendor-provided models carry less governance burden because they were developed externally.

Why It Matters

Use of vendor-provided models continues to grow, particularly in AI-driven solutions, but supervisory expectations remain firm. “Third-party models do not diminish the responsibility of the institution for its governance and oversight of the model – whether it’s monitoring, ongoing validation, or just evaluating drift [and] model documentation,” Longman said. “The board and senior managers are responsible to make sure that these models are performing as expected, and that includes third-party models.”

Regulators consistently emphasize that institutions remain responsible for the outcomes produced by models used in their decisioning environments, regardless of origin. If a vendor model influences credit approvals, pricing, fraud decisions, or capital calculations, it directly affects customers, financial performance, and compliance exposure. Treating third-party models as inherently lower risk can also distort internal tiering frameworks. When vendor models are under-classified, validation depth and monitoring rigor may be insufficient relative to their true impact.

3. Limited Situational Awareness of Model Interdependencies

The Blind Spot

A single data source or upstream model may feed multiple downstream models simultaneously.

Why It Matters

Risk often flows across interdependencies. When upstream models degrade in performance or introduce bias, downstream models inherit that exposure.
If multiple material decisions depend on the same data transformation or feature engineering process, concentration risk emerges. Without visibility into these dependencies, tiering assessments may underestimate cumulative risk, and monitoring frameworks may fail to detect systemic vulnerabilities. “There has to be a holistic view of what models are being used for – and really somebody to ensure there’s not that overlap across models,” Longman said.

Supervisors are increasingly interested in understanding how model risk propagates through business processes. When institutions cannot articulate how models interact, it raises broader concerns about situational awareness and control effectiveness. Therefore, capturing interdependencies within the classification framework enhances more than documentation: it enables more accurate tiering, more targeted monitoring, and more informed governance oversight.

4. Excluding Models Without Defensible Rationale

The Blind Spot

Gray-area tools frequently sit outside formal inventories: rule-based engines, spreadsheet models, scenario calculators, heuristic decision aids, and emerging AI tools used for analysis and summarization. These tools may not neatly fit legacy definitions of a “model,” and so they are sometimes excluded without robust documentation.

Why It Matters

Regulatory definitions of “model” have broadened over time. Exclusion is not inherently improper; what creates risk is the absence of defensible reasoning and documentation. Longman describes the risk clearly: “Some [teams] are deploying AI solutions that are sort of unbeknownst to the model risk management community – and almost creating what you might think of as a shadow model inventory.” Without visibility, institutions cannot confidently characterize use, trace inputs, or assign appropriate tiers, according to Longman. It also undermines the credibility of the official inventory during examinations.
A well-governed program can articulate why certain tools fall outside model risk management scope, referencing documented criteria aligned with regulatory guidance. Without that evidence, exclusions can appear arbitrary, suggesting gaps in oversight.

5. Inconsistent or Subjective Classification Frameworks

The Blind Spot

As inventories scale and governance teams expand, classification decisions are often distributed across reviewers. Over time, discrepancies can emerge.

Why It Matters

Inconsistency undermines both risk management and regulatory confidence. If two models with comparable use cases and impact profiles are assigned different tiers without clear justification, it signals that the framework is not being applied uniformly. AI adds even more complexity. When it comes to emerging AI model governance versus traditional model governance, there is a lot to unpack, says Longman: “The AI models themselves are a lot more complicated than your traditional logistic or multiple regression models. The data, the prompting – you need to monitor the prompts that the LLMs, for example, are responding to, and you need to make sure you can [manage] what you may think of as prompt drift,” Longman said.

As frameworks evolve, particularly to incorporate AI, automation, and new regulatory interpretations, institutions must ensure that changes are cascaded across the entire inventory. Partial updates or selective reclassification introduce fragmentation. Longman recommends formalizing classification through a structured decision tree embedded in policy to ensure consistent outcomes across business units. Beyond clear documentation, a strong classification program is applied consistently, measured objectively, and periodically reassessed across the full portfolio.

BONUS – 6. Elevating Classification with Data-Level Visibility

Some institutions are extending classification discipline beyond models to the data layer itself.
Longman describes organizations that maintain not only a model inventory but also a data inventory, mapping variables to the models they influence. This approach allows institutions to quickly assess downstream effects when operational or environmental changes occur, including system updates or even natural disasters affecting payment behavior. In an AI-driven environment, traceability may become a competitive differentiator.

Conclusion

Model classification is foundational. It determines how risk is measured, monitored, escalated, and reported. In a rapidly evolving regulatory and technological environment, it cannot remain static. Institutions that invest now in transparency, consistency, and data-level visibility will not only reduce supervisory friction; they will build a governance framework capable of supporting the next generation of AI-enabled decisioning.

Learn more
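The data-inventory idea described above, mapping each variable to the models it influences, can be sketched as a simple lookup structure. This is a hypothetical illustration only, not any institution's actual tooling; the class name, variable names, and model identifiers are all invented for the example. The same structure also surfaces the concentration risk discussed in blind spot 3: variables that feed multiple models at once.

```python
from collections import defaultdict

class DataInventory:
    """Hypothetical data inventory: variables mapped to consuming models."""

    def __init__(self):
        # variable name -> set of model identifiers that consume it
        self._consumers = defaultdict(set)

    def register(self, variable, model_id):
        """Record that a model consumes a given input variable."""
        self._consumers[variable].add(model_id)

    def impacted_models(self, variables):
        """Models affected if any of the given variables change or degrade."""
        impacted = set()
        for var in variables:
            impacted |= self._consumers[var]
        return impacted

    def concentration(self):
        """Variables feeding more than one model: concentration-risk candidates."""
        return {v: sorted(m) for v, m in self._consumers.items() if len(m) > 1}

# Example usage with invented names
inv = DataInventory()
inv.register("days_past_due", "credit_score_v2")
inv.register("days_past_due", "collections_prioritizer")
inv.register("utilization_ratio", "credit_score_v2")

# A system update touching 'days_past_due' impacts both downstream models,
# and that variable shows up as a shared (concentrated) dependency.
print(inv.impacted_models({"days_past_due"}))
print(inv.concentration())
```

A structure like this is what makes the "quickly assess downstream effects" step concrete: when an operational change touches a variable, the inventory answers which models (and therefore which tiers and monitoring plans) are in scope.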

by Stefani Wendel 1 min read March 20, 2026

We’re excited to share that Experian Automotive’s client Hamlin & Associates and Honda World have been named winners of the 2025 Automotive News / Ad Age Global Automotive Marketing Award for Best Use of Data — an honor that celebrates meaningful, measurable impact.

Why this work stood out

Hamlin & Associates' client, Honda World of Louisville, KY, faced a clear challenge: re-engage customers and recover declining service revenue, particularly for vehicles with open recalls. Hamlin & Associates approached the problem with a simple belief: clean, accurate data leads to better outcomes for customers and dealerships alike. They began with data hygiene, then enriched each vehicle record using Experian Automotive’s Recall VIN Verification solution. This created a precise view of who owned which vehicles, which recalls were still open, and when repairs could be completed — all essential to a smooth customer experience.

A smarter, more human outreach strategy

Over the course of a year, Hamlin delivered four waves of direct mail designed to cut through the noise. Each letter:

- Spoke directly to the customer
- Highlighted their specific vehicle
- Explained the recall in clear language
- Showed how easy it was to book a free repair

The result was a data-driven communication plan grounded in trust and simplicity — and it worked.

Results that show what’s possible

- 26% response rate
- 1,953 repair orders
- $811,834 in service revenue
- Thousands of customers now driving safer vehicles

These outcomes reflect more than campaign performance. They demonstrate what happens when dealers, agencies, and data partners collaborate to guide individuals toward safer, more informed decisions.

In their words

John Hamlin, Hamlin & Associates: “Clean data builds trust. When we combine our hygiene process with Experian Automotive insights, dealers uncover opportunities they never knew they had.”

Mike Porro, Honda World: “They keep it simple, and data-driven ‘simple’ gets done.
We follow the process, train our staff, and see the results.”

Looking ahead

We’re proud to celebrate Hamlin & Associates and Honda World for showing what’s achievable when data, insight, and clear communication come together. Their work helps people stay safe, strengthens customer relationships, and sets a new standard for recall outreach. Congratulations to the entire team — and here’s to helping even more drivers move forward!

Learn more about how to enrich your first-party data with Recall VIN Verification insights!

by Trish Radaj 1 min read December 18, 2025

From the vehicles we drive to the way we purchase them, everything in the automotive industry is evolving as new technologies, shifting incentives, and changing consumer expectations continue to develop.

As electrified vehicles continue to grow their presence on the road, Experian’s Automotive Market Trends Report: Q3 2025 took a deep dive into this segment and found that 5.5 million electric vehicles (EVs) and 11.7 million hybrids were in operation this quarter. Furthermore, data through the third quarter of this year found that 73.8% of EV owners returning to market replaced their EV with another EV, while only 16.5% switched to a gas-powered vehicle. This significant EV loyalty among consumers signals that the ownership experience is delivering on core expectations.

While some owners continued to opt for an EV because they’ve grown accustomed to certain conveniences, such as charging at home or the workplace instead of traditional fueling, along with lower maintenance needs, others took advantage of the EV tax credits before they expired at the end of September. As these motivations shift, it will be important to monitor how the EV market unfolds over the next six months.

Notably, 11.7% of gas-powered vehicle owners replaced their vehicle with a gas-hybrid vehicle this quarter, suggesting that hybrids are acting as an effective bridge toward deeper electrification. Drivers may see hybrids as the ‘happy medium’: a vehicle that offers improved fuel efficiency without requiring full reliance on charging infrastructure.

Why this matters for the aftermarket

As the majority of consumers replace their EVs with another one, and some switch their gas-powered vehicle for an electrified one, these trends signal potential long-term commitment to alternative fuel segments. This is important for aftermarket professionals to monitor as EV service volume continues to grow, requiring different parts and technician training.
With consumers increasingly turning to the aftermarket for cost-effective support, professionals who adapt to diverse powertrains will be best positioned to navigate this evolving wave of post-warranty demand. To learn more about EVs and other vehicle market trends, view the full Automotive Market Trends Report: Q3 2025 presentation on demand.

