Artificial Intelligence (AI) offers people and companies many advantages, and we interact with it every day – from the technology we use for simple things like heating and cooling our homes to more advanced tools that map potential disease outbreaks across the globe.
AI is also being used more and more in the financial services sector – from matching new customers with the right loan and terms to assisting with transactions in real-time online. In a recent study, we found two-thirds of businesses surveyed globally are using AI to help manage their businesses today. More businesses are keen to use AI but are challenged to fulfill requirements for decision explainability – a must-do for ensuring consumers are treated fairly.
The history of AI
AI stems from the realization of the potential of computation. The father of theoretical computer science and AI, Alan Turing, introduced a theoretical mathematical model of computation – aptly named the Turing Machine – in 1936. He described this machine as being capable of computing anything computable. In 1950, his paper “Computing Machinery and Intelligence” posed the question “Can machines think?” and introduced the Turing Test, still used today to subjectively evaluate whether a machine is intelligent based on its ability to hold a conversation. Six years later, in 1956, prominent computer scientists convened the famous Dartmouth Summer Research Project, where advanced concepts were introduced and discussed and the term “artificial intelligence” was first coined.
Over the following two decades, AI flourished. Computers became faster, cheaper, and more accessible, and they were progressively able to store more information. Meanwhile, machine learning algorithms continued to improve, attracting the interest of experts across fields and industries and taking artificial intelligence to a tipping point in the early ’80s. Back then, John Hopfield and David Rumelhart popularized neural network techniques – the foundation of today’s “deep learning” – which allowed computers to learn from experience. Meanwhile, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert: the program could ask an expert in a field how to respond in a given situation and learn from the answer.
How can AI benefit both businesses and consumers?
Following these early milestones, the advanced analytics sector has experienced explosive growth, and AI now touches many aspects of our lives. While most people have come to see AI as beneficial, there have long been differing views on how those who program the algorithms should prevent AI from reinforcing stereotypes, widening wealth and educational gaps, or giving incorrect answers at critical junctures, such as in a medical setting.
As an example of what not to do: a famous language model was trained on 8 million pages sourced directly from the web, so the preconceptions and biases in that training data are implicit in the model. In this case, the result was a model that disproportionately associated more senior, higher-paying jobs with men.
How to determine fairness in AI models
So how can we ensure that the use of AI does not reinforce societal racism, sexism, or other stereotypes? That leads us to define fairness: the impartial and just treatment of people without favoritism or discrimination – no unjustified distinctions based on the groups, classes, or other categories to which people are perceived to belong.
But within the world of AI, there are varying approaches to fairness, each associated with different metrics for evaluating and achieving this sought-after algorithmic fairness. Any solution requires defining dimensions of fairness, but realistically it is extremely hard to capture all these sensitive variables and risky to store and process them. Truly determining whether an AI system is fair requires an enormous amount of data and expertise.
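To make one of these metrics concrete, here is a minimal sketch of a demographic-parity check via the disparate-impact ratio – the approval rate of a protected group divided by that of everyone else. The decisions, group labels, and the 0.8 threshold are illustrative assumptions, not a standard implementation.

```python
# A minimal sketch of one common fairness metric: the disparate-impact
# ratio (demographic parity). All data here are illustrative.

def disparate_impact(decisions, groups, protected_group):
    """Ratio of approval rates: protected group vs. everyone else."""
    protected = [d for d, g in zip(decisions, groups) if g == protected_group]
    other = [d for d, g in zip(decisions, groups) if g != protected_group]
    rate_protected = sum(protected) / len(protected)
    rate_other = sum(other) / len(other)
    return rate_protected / rate_other

# Toy loan decisions (1 = approved) for two hypothetical groups.
decisions = [1, 0, 1, 0, 0, 1, 1, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = disparate_impact(decisions, groups, protected_group="A")
# A common rule of thumb flags ratios below 0.8 (the "four-fifths rule").
print(f"disparate impact ratio: {ratio:.2f}")  # → disparate impact ratio: 0.67
```

A ratio well below 1.0, as here, would prompt a closer look at the model and its training data; note this is only one of several competing fairness definitions mentioned above.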
Additionally, promoting fairness requires an approach across the entire data science life cycle and modeling life cycle. All areas must be considered from the approach to data collection to ongoing evaluation of decisions. And, while fairness in AI is not ‘once and done’ or easily solved, the good news is that it is an area of great focus for regulators, academics, and data and analytics industry experts, like our peers at Experian.
The growing importance of transparency and explainability
Models often perform calculations that are complex and involve more dimensions than we can directly comprehend. Because this step from model input to model output is opaque, questions arise about how a model has come to a decision. Importantly, how can one be sure that the model is behaving as expected?
There are different ways to address explainability. One involves understanding how the different inputs of a model affect its outputs. Shapley values, named after Nobel prize winner Lloyd Shapley, attribute a model’s output to its inputs by averaging each feature’s marginal contribution across all possible combinations of the other features.
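The averaging over combinations can be computed exactly for a tiny model, as in the sketch below. The three-feature scoring function, the baseline values, and the feature names are illustrative assumptions; real tools approximate this computation because the number of combinations grows factorially.

```python
# Exact Shapley values for a toy 3-feature scoring model, computed by
# averaging each feature's marginal contribution over all orderings.
# The model and data are hypothetical illustrations.
from itertools import permutations
from math import factorial

def model(x):
    # Hypothetical scoring model: a weighted sum of three features.
    return 0.5 * x["income"] + 0.3 * x["tenure"] - 0.2 * x["debt"]

baseline = {"income": 1.0, "tenure": 1.0, "debt": 1.0}  # "average" applicant
instance = {"income": 3.0, "tenure": 2.0, "debt": 5.0}  # applicant to explain
features = list(instance)

def coalition_value(present):
    # Features outside the coalition are held at their baseline value.
    x = {f: (instance[f] if f in present else baseline[f]) for f in features}
    return model(x)

shapley = {f: 0.0 for f in features}
for order in permutations(features):
    present = set()
    for f in order:
        before = coalition_value(present)
        present.add(f)
        shapley[f] += coalition_value(present) - before
for f in shapley:
    shapley[f] /= factorial(len(features))

print(shapley)  # each value is that feature's share of the score change
```

A useful sanity check is that the Shapley values always sum to the difference between the model’s output on the instance and on the baseline.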
Another technique involves explaining a decision by identifying which model inputs were held constant versus which varied, to extract what drove the decision and how. Yet another method uses counterfactual explanations, identifying the precise boundary where a decision changes. This method is easy to communicate since it yields statements such as “if X had not occurred, Y would not have happened.”
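A counterfactual explanation of that if-X-then-Y form can be found with a simple search, as in this sketch. The decision rule, the threshold, and the choice to vary only income are illustrative assumptions, not a production method.

```python
# A minimal counterfactual-explanation sketch: find the smallest change
# to one feature that flips a denial into an approval. The scoring rule
# and threshold are hypothetical.

def approve(income, debt, threshold=0.6):
    # Hypothetical decision: approve when the score crosses the threshold.
    return 0.5 * income - 0.2 * debt >= threshold

def counterfactual_income(income, debt, step=0.01, max_income=100.0):
    """Smallest income increase (holding debt fixed) that flips a denial."""
    if approve(income, debt):
        return 0.0  # already approved, no change needed
    candidate = income
    while candidate <= max_income:
        if approve(candidate, debt):
            return candidate - income
        candidate += step
    return None  # no counterfactual found within the search range

delta = counterfactual_income(income=2.0, debt=3.0)
# Reads as: "had income been `delta` higher, the loan would have been
# approved" -- the kind of statement described above.
print(f"income increase needed: {delta:.2f}")
```

In practice, counterfactual methods search over many features at once and prefer the smallest or most actionable change, but the boundary-crossing idea is the same.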
As in the case of fairness, there is an ongoing dialogue around explainability, underpinned by current techniques and new ones yet to emerge that maintain model accuracy while improving explainability.
Artificial intelligence is past its infancy. It has already had an impact on our daily lives and is becoming increasingly ubiquitous. Fairness, together with a transparent and explainable approach, is a key ingredient in helping this field continue its transition to maturity.