
Cloud Computing During COVID-19 and Beyond

by Kelly Nguyen 3 min read November 18, 2020

The global pandemic has created major shifts in the ways companies operate and innovate. For many organizations, a heavy reliance on cloud applications and cloud services has become the new normal, with the cloud praised as “an unsung hero” for accommodating a world in crisis, as an article from The Channel Company put it.

However, cloud computing isn’t just for consumers and employees working from home. Over the last few years, it has changed the way organizations and businesses operate. Cloud-based solutions offer the flexibility, reduced operational costs, and fast deployment that can transform how traditional companies work. In fact, migrating services and software to the cloud has become a key step in a successful digital transformation.

What is cloud computing?
Simply put – it’s the ability to run applications or software on remote servers hosted by external providers, a model known as infrastructure-as-a-service (IaaS). Data is stored online and accessed via the Internet. According to a study by CommVault, more than 93% of business leaders say they are moving at least some of their processes to the cloud, and a majority are already cloud-only or plan to migrate completely.
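
To make that concrete, the snippet below is a minimal sketch of storing a file on a provider’s remote servers and retrieving it over the Internet. It assumes an AWS account and the boto3 Python SDK; the bucket and file names are purely illustrative, not tied to any specific offering.

```python
# A sketch of "applications on remote servers, data accessed via the
# Internet," assuming an AWS account and the boto3 SDK. The bucket name
# and file paths are hypothetical.
import boto3

s3 = boto3.client("s3")

# Upload a local file to a storage bucket hosted on the provider's servers.
s3.upload_file("sales_report.csv", "example-company-bucket",
               "reports/sales_report.csv")

# Any authorized machine with Internet access can later retrieve it.
s3.download_file("example-company-bucket", "reports/sales_report.csv",
                 "sales_report.csv")
```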

In a recent Forrester blog titled ‘Troubled Times Test Traditional Tech Titans,’ Glenn O’Donnell, Vice President and Research Director at Forrester, highlights that “as we saw in prior economic crises, the developments that carried business through the crisis remained in place. As many companies shift their infrastructure to cloud services through this pandemic, those migrated systems will almost certainly remain in the cloud.”

In short, cloud computing is the new wave – now more than ever during a crisis. But what are the benefits of moving to the cloud?

  1. Flexibility
    Cloud computing offers the flexibility that companies need to adjust to fluctuating business conditions. During periods of unexpected growth or slowdown, companies can add or remove storage space, applications, or features and scale as needed, paying only for the resources they use. In a pandemic, this flexibility and easy access are key to adjusting to volatile market conditions (see the sketch after this list).
  2. Reduced operational costs
    Companies (big or small) that want to reduce the cost of running a data center will find that moving to the cloud is extremely cost-effective. Cloud computing eliminates the high cost of hardware, IT resources, and maintaining on-premise data systems. Cloud-based solutions can also help organizations modernize their IT infrastructure and automate their processes. By migrating to the cloud, companies can save substantial capital costs and see a higher return on investment while maintaining efficiency.
  3. Faster deployment
    With the cloud, companies can deploy and launch programs and applications quickly and seamlessly. Programs can be deployed in days rather than weeks, so businesses can operate faster and more efficiently than ever. During a pandemic, faster deployment helps organizations adapt, update their software, and pivot quickly in response to changing market conditions.
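
To illustrate the elasticity described in benefit 1 above, here is a minimal sketch using the boto3 SDK against an AWS Auto Scaling group; the group name “web-fleet” and the capacity figures are assumptions for illustration only.

```python
# A sketch of on-demand scaling, assuming an AWS Auto Scaling group.
# The group name "web-fleet" and the capacity numbers are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

def scale_fleet(min_size: int, max_size: int, desired: int) -> None:
    """Resize the server fleet so capacity (and cost) follows demand."""
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-fleet",
        MinSize=min_size,
        MaxSize=max_size,
        DesiredCapacity=desired,
    )

# Unexpected growth: add capacity within minutes, with no hardware purchase.
scale_fleet(min_size=2, max_size=20, desired=10)

# Slow period: shrink the fleet and stop paying for idle servers.
scale_fleet(min_size=1, max_size=5, desired=2)
```

Because billing follows the desired capacity rather than a fixed hardware footprint, scaling down is as simple (and as immediate) as scaling up.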

Flexible, scalable, and cost-effective solutions will be the keys to thriving during and after a pandemic. That’s why we’ve enhanced a variety of our solutions to be cloud-based – to help your organization adapt to today’s changing customer needs. Solutions like our Attribute Toolbox are now officially on the cloud, helping your organization make better, faster, and more effective decisions.

Learn More
