What is Explainable AI, and Why Does Your Business Need It?

Artificial Intelligence (AI) has long been regarded as an impenetrable black box. As the field matures and becomes more accessible, there is a growing need to understand what drives the decision-making of AI models. When the inputs are highly complex, it is difficult for humans to understand which factors influence the output. Explainable Artificial Intelligence (XAI) grew out of this desire to understand model outputs: it is a set of methods and processes that can be used to describe an AI model and its potential outcomes.

Why we need Explainable AI

While adding explainability can increase the workload on development teams, its advantages make for a more robust model in the long run. Bias in models - whether related to gender, race, age or location - has always been a risk when training them. Moreover, an algorithm's performance can drift and degrade over time, leading to unexpected results. For a business it is crucial to understand model behaviour: first to identify when unwanted bias or model drift has occurred, and second to rectify it quickly.

Ethical and responsible AI is becoming a major focus for researchers and companies alike: it is no longer just about how good a model can be, but also about ensuring it conforms with legal and ethical requirements. The 2022 IBM Institute for Business Value study on AI Ethics in Action found that providing trustworthy AI models is becoming a strategic differentiator for organisations offering services and products backed by Artificial Intelligence. Of the survey's respondents, 75% indicated that they see ethics as a source of competitive differentiation, and 79% of CEOs said they would be willing to adopt ethical AI practices, a massive increase from 20% in 2018. To help organisations adopt AI responsibly, ethical principles such as trust and transparency through XAI must be an essential design factor.

Transparency is crucial for creating trust between the model and the end user. IBM's AI Ethics survey found that 85% of IT professionals agree that consumers are more likely to choose a company that can show exactly how its models are built and how they work. Being able to provide a human-centred explanation of the factors that contributed to a model's output therefore helps foster that trust, leading to more productive use of AI tools and greater consumer uptake.

Generally, this human-readable explanation shows which inputs had the biggest effect on the output. That can be incredibly helpful in businesses where AI models make decisions that directly affect customers. For example, if a loan company rejects an application, it is far more helpful to explain the reason for the rejection than to hand back a bare computer output.

Training an AI model to a high level of accuracy can be time-consuming and resource-intensive. Complex models such as neural networks can be hard to understand even for experts, and tuning parameters to gain that last percentage point of accuracy can sometimes seem impossible. By showing which key decisions led to a specific output, XAI helps developers optimise and debug their models more efficiently, saving time and resources.

How does Explainable AI work?

The work of Explainable Artificial Intelligence is to make complex models simpler to understand, and it can operate at a global or a local level. Global explainability aims to give a general overview of how the model performs and which features are important across the whole dataset. A survey of housing prices within a city might show that, globally, larger houses are generally worth more. But what happens when a small house carries a higher price tag than a larger counterpart? This is where local explainability comes in: it aims to explain a particular result that may not agree with the global consensus. In the housing example, a local explanation might show that the house was close to the city and therefore had a higher value.

Global explainability is typically achieved by taking a simpler model, such as a decision tree or linear regression, and training it to mimic the outputs of a more complex model like a neural network. These are known as global surrogate models. Surrogates are generally far easier to understand, and they make it clear which features are globally important to an output. However, although a surrogate behaves in a similar manner to its complex counterpart, it does not reach the same level of accuracy and therefore cannot be a direct replacement.
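
To make the surrogate idea concrete, here is a minimal sketch in Python using scikit-learn. The gradient-boosted model, synthetic data and feature names are illustrative assumptions rather than anything from this article; the point is simply that the decision tree is trained on the black-box model's predictions instead of the true labels.

```python
# Minimal global-surrogate sketch: a shallow decision tree mimics a complex model.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=1000, n_features=3, noise=10.0, random_state=0)

# The complex "black box" model we want to explain.
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the true labels,
# so its structure approximates how the black box behaves across the dataset.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Print the tree's rules using illustrative feature names.
print(export_text(surrogate, feature_names=["house_size", "land_size", "distance_to_city"]))

# R^2 of the surrogate against the black box's predictions ("fidelity") shows
# how faithfully the simple model mimics the complex one.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
```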

While global explainability is good at providing a holistic view of a model, local explainability is considered a more effective and accurate tool for XAI. At its core, local explainability uses feature analysis methods to assign each feature a quantitative score that reflects its importance to a particular output of the model.

The first of these methods is Local Interpretable Model-Agnostic Explanations, or LIME for short. It works by manipulating the original data around a certain input value to create a new, smaller dataset that can be fed through the original model to produce a new set of outputs. LIME then fits a more interpretable model, such as a linear regression, to these samples, weighting each one by how close it is to the original sample of interest. By identifying which features had the biggest effect around that particular sample, it builds a picture of which features have the biggest impact on that output. For example, suppose a house pricing model predicts a price from a house's size, its land size and its distance to the city.

It could then be useful to understand which feature had the biggest influence on that house's price. LIME would take that particular input set and 'perturb' it, changing the values to create a new dataset of samples distributed around it; the perturbed house sizes might range from 100 m² to 200 m², for example. This produces a new range of house prices that LIME can use to identify whether house size, land size or distance to the city was the most important feature.
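
The sketch below shows roughly what this looks like with the open-source lime package. The random-forest model, the synthetic training data and the feature names are placeholders invented for the housing example, not code from this article.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

# Illustrative housing data: size (m²), land size (m²), distance to city (km).
feature_names = ["house_size_m2", "land_size_m2", "distance_to_city_km"]
rng = np.random.default_rng(0)
X_train = rng.uniform([80, 200, 1], [300, 1000, 40], size=(500, 3))
y_train = 5000 * X_train[:, 0] + 800 * X_train[:, 1] - 20000 * X_train[:, 2]

# Stand-in for the "black box" house pricing model.
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(X_train, feature_names=feature_names, mode="regression")

# Explain a single house: LIME perturbs this row, queries the model on the
# perturbed samples, and fits a locally weighted linear model around it.
house = np.array([150.0, 450.0, 5.0])
explanation = explainer.explain_instance(house, model.predict, num_features=3)
print(explanation.as_list())  # each feature with its local contribution
```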

Shapley Additive Explanations (SHAP) is another popular method, based on Shapley values and game theory; it is mainly used for local explanations, although it can also produce global ones. Historically, Shapley values have been used to calculate an individual player's contribution to a team's result. In theory it would be easy to simply remove each player in turn and measure how the team performs without them, but this ignores the relationships between players and how they perform when paired with certain teammates. The same is true of features in a machine learning dataset. In the housing price example, house size and land size might each produce fairly ordinary results on their own, but house size and distance to the city considered together could produce a much better result, in this case the house price. A SHAP algorithm therefore evaluates every possible subset of the available features, calculates each feature's contribution within each subset, and averages those contributions to get a holistic view of the feature's contribution to a given output.
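
As a rough illustration, the snippet below uses the open-source shap package on the same kind of illustrative housing model; the data, model and feature names are again assumptions made for the sake of the example.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

feature_names = ["house_size_m2", "land_size_m2", "distance_to_city_km"]
rng = np.random.default_rng(0)
X = rng.uniform([80, 200, 1], [300, 1000, 40], size=(500, 3))
y = 5000 * X[:, 0] + 800 * X[:, 1] - 20000 * X[:, 2]
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model)   # dispatches to a tree explainer for tree ensembles
shap_values = explainer(X[:5])      # local attributions for five sample houses

# Each row of attributions (plus the base value) sums to that house's prediction;
# averaging absolute values across many rows gives a global importance ranking.
for row in shap_values.values:
    print(dict(zip(feature_names, np.round(row, 1))))
print("global importance:",
      dict(zip(feature_names, np.abs(shap_values.values).mean(axis=0).round(1))))
```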

LIME and SHAP are just two examples of local explainability methods; others include partial dependence plots, accumulated local effects and integrated gradients. At their core they all try to explain the local importance of individual features, yet each is better suited to different problems. Whichever method is chosen, it is crucial to start embedding local explainability in any AI project.
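
For completeness, a partial dependence plot is straightforward to produce with scikit-learn's inspection module; the model and data below are the same illustrative assumptions used in the earlier sketches.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

feature_names = ["house_size_m2", "land_size_m2", "distance_to_city_km"]
rng = np.random.default_rng(0)
X = rng.uniform([80, 200, 1], [300, 1000, 40], size=(500, 3))
y = 5000 * X[:, 0] + 800 * X[:, 1] - 20000 * X[:, 2]
model = RandomForestRegressor(random_state=0).fit(X, y)

# Average predicted price as each feature is varied over its range.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1, 2],
                                        feature_names=feature_names)
plt.show()
```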

Industry Solutions

As legislation on AI governance and ethics is passed around the world, it is only a matter of time before Explainable AI becomes a requirement for many large-scale enterprise projects. Beyond that, the obligation to follow Responsible AI practices has seen industry leaders like Google and IBM incorporate XAI methods directly into their products.

One of the most exciting developments in this area was the 2019 release by IBM's research centre of AI Explainability 360, an open-source toolkit for explaining complex models. Its algorithms have since been integrated into IBM's Cloud Pak for Data platform so that model outputs can be explained seamlessly. Thanks to the toolkit's open-source nature, the number of available explainability methods is only likely to grow as the community contributes and develops new tools.

In addition to offering AI and machine learning services, Google builds XAI tools such as Shapley-value-based feature attributions into AutoML Tables, Vertex AI and Notebooks, helping teams detect bias, drift and other errors during the design phase. Google also offers a 'What-If' tool that lets you experiment with different parameters so you can optimise your model more easily. Once a model is in production, a monitoring panel lets you compare the model's output against a source of ground truth, making it easy to track performance.

TL;DR

Explainable AI is a field of Artificial Intelligence concerned with making complex models simpler so that humans can understand them more easily. There are many reasons to adopt it: it helps developers debug and optimise models more efficiently, and it fosters trust between the model and the end user.

As legislation is beginning to prioritise ethical AI practices, it’s important for your business to be ahead of the game. This is where we can come in - Spark 64 is an artificial intelligence agency on a mission to make AI more accessible. We specialise in language, vision and data to accelerate your business, streamline processes, and uncover meaningful insights through data.

By Morgan Davies, Full Stack AI Developer, and Erica Fogarty, Marketing Coordinator
