
What Is Open Banking?

by Laura Burrows 4 min read April 25, 2024

Open banking is revolutionizing the financial services industry by encouraging a shift from a closed model to one with greater transparency, competition, and innovation. But what does this mean for financial institutions, and how can you adapt to this new landscape, balancing opportunity against risk?

In this article, we will define open banking, illustrate how it operates, and weigh the challenges and benefits for financial institutions.

What is open banking?

Open banking stands at the forefront of financial innovation, embodying a shift toward a more inclusive, transparent, and consumer-empowered system. At its core, open banking relies on a simple yet powerful premise: it uses consumer-permissioned data to create a networked banking ecosystem that benefits financial institutions and consumers alike.

By having secure, standardized access to consumer financial data — granted willingly by the customers themselves — lenders can gain incredibly accurate insights into consumer behavior, enabling them to personalize services and offers like never before.

How does open banking work?

Open banking is driven by Application Programming Interfaces (APIs), which are sets of protocols that allow different software components to communicate with each other and share data seamlessly and securely. In the context of open banking, these APIs enable:

  1. Account Information Services (AIS): These services allow third-party providers (TPPs) to access account information from financial institutions (with customer consent) to provide budgeting and financial planning services.
  2. Payment Initiation Services (PIS): These services permit TPPs to initiate payments on behalf of customers, often offering alternative, faster, or cheaper payment solutions compared to traditional banking methods.
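To make the AIS flow above concrete, here is a minimal sketch of the kind of exchange a TPP budgeting app might perform. The function name, response fields, and the `accounts:read` scope are illustrative assumptions, not part of any real open banking standard; production APIs (for example, those mandated under PSD2) define their own schemas and authentication flows.

```python
# Hypothetical sketch of an AIS (Account Information Service) request.
# All names and fields here are illustrative, not a real API schema.

def fetch_account_summary(accounts_db, account_id, consent):
    """Return balance and recent transactions if the customer's consent covers them."""
    if "accounts:read" not in consent["scopes"]:
        raise PermissionError("Customer consent does not cover account reads")
    account = accounts_db[account_id]
    return {
        "account_id": account_id,
        "balance": account["balance"],
        "transactions": account["transactions"][-5:],  # most recent five
    }

# Example data a bank's API might expose to a consented TPP:
accounts = {
    "acct-001": {
        "balance": 1250.75,
        "transactions": [{"amount": -42.10, "desc": "Groceries"}],
    }
}
consent = {"scopes": ["accounts:read"], "customer": "cust-123"}
summary = fetch_account_summary(accounts, "acct-001", consent)
```

The key design point is that the customer's consent, not the TPP's identity alone, gates what data is returned; a PIS call would follow the same pattern with a `payments:initiate` scope instead.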

Financial institutions must develop and maintain robust and secure APIs that TPPs can integrate with. This requires significant investment in technology and cybersecurity to protect customer data and financial assets. There must also be clear customer consent procedures and data-sharing agreements between financial institutions and TPPs.
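A clear consent procedure typically means the bank stores a consent record per TPP and checks it on every request. The sketch below shows one plausible shape for such a record; the field names and the 90-day validity window are assumptions for illustration, since real consent lifetimes and revocation rules are set by regulation and by each institution's agreements with TPPs.

```python
from datetime import datetime, timedelta, timezone

# Illustrative consent record a bank might persist after a customer
# authorizes a TPP. Field names and the 90-day window are assumptions.

def is_consent_valid(consent, required_scope, now=None):
    """True only if the consent grants the scope, has not expired, and is not revoked."""
    now = now or datetime.now(timezone.utc)
    return (
        required_scope in consent["scopes"]
        and now < consent["expires_at"]
        and not consent["revoked"]
    )

granted_at = datetime.now(timezone.utc)
consent = {
    "tpp_id": "tpp-acme-budget",
    "scopes": ["accounts:read", "payments:initiate"],
    "expires_at": granted_at + timedelta(days=90),
    "revoked": False,
}
```

Checking expiry and revocation on every call, rather than only at authorization time, is what lets customers withdraw data-sharing permission and have it take effect immediately.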

Benefits of open banking

Open banking is poised to create a wave of innovation in the financial sector. One of the most significant benefits is the ability to gain a more comprehensive view of a consumer’s financial situation. With a deeper view of consumer cashflow data and access to actionable insights, you can improve your underwriting strategy, optimize account management, and make smarter decisions to safely grow your portfolio.

Additionally, open banking promotes financial inclusion by enabling financial institutions to offer more tailored products that suit the needs of previously underserved or unbanked populations. This inclusivity can help bridge the gap in financial services, making them accessible to a broader segment of the population.

Furthermore, open banking fosters competition among financial institutions and fintech companies, leading to the development of better products, services, and competitive pricing. This competitive environment not only benefits consumers but also challenges banks to innovate, improve their services, and operate more efficiently.

The collaborative nature of open banking encourages an ecosystem where traditional banks and fintech startups co-create innovative open banking solutions. This synergy can accelerate the pace of digital transformation within the banking sector, leading to the development of cutting-edge technologies and platforms that address specific market gaps or consumer demands. 

Challenges of open banking

While open banking presents a plethora of opportunities, its adoption is not without challenges. Financial institutions must grapple with several hurdles to fully leverage the benefits open banking offers.

One of the most significant challenges is ensuring data security, privacy, and effective fraud detection. Sharing financial data through APIs necessitates robust cybersecurity measures to protect sensitive information from breaches and fraud. Banks and TPPs alike must invest in advanced security technologies and protocols to safeguard customer data.

Additionally, regulatory compliance poses a considerable challenge. Open banking regulations vary widely across different jurisdictions, requiring banks to adapt their operations to comply with diverse legal frameworks. Staying abreast of evolving regulations and ensuring compliance can be resource-intensive and complex.

Furthermore, customer trust and awareness are crucial to the success of open banking. Many consumers are hesitant to share their financial data due to privacy concerns. Educating customers on the benefits of open banking and the measures taken to ensure their data’s security is essential to overcoming this obstacle.

Despite these challenges, the strategic implementation of open banking can unlock remarkable opportunities for innovation, efficiency, and service enhancement in the financial sector. Banks that can successfully navigate these hurdles and capitalize on the advantages of open banking are likely to emerge as leaders in the new era of financial services.

Our open banking strategy

Our newly introduced open banking solution, Cashflow Attributes, powered by Experian’s proprietary data from millions of U.S. consumers, offers unrivaled categorization and valuable consumer insights. The combination of credit and cashflow data empowers lenders with a deeper understanding of consumers. Furthermore, it harnesses our advanced capabilities to categorize 99% of Demand Deposit Account (DDA) and credit card transaction data, guaranteeing dependable inputs for robust risk assessment, targeted marketing, and proactive fraud detection.

