
Explanatory Methods of Artificial Intelligence In Health

Traditionally, most Artificial Intelligence (AI) methods have been treated as black boxes: we feed them data, and they return a prediction. However, it is sometimes essential to know why our model makes the decisions it makes.

For example, decision-making is a critical point in the medical field, since a decision can directly influence people's health. Therefore, if these AI methods are to aid decision-making, it is necessary to know how each variable affects the prediction produced by the model.

It is also helpful to know whether our model is biased when making predictions, or when it deviates from the standard criteria in certain decisions. After all, when we train an Artificial Intelligence model, we are trying to discover the patterns that the data follow. If our data contain biases, caused mainly by the people who entered the information, our AI model will learn those biases as well.

How is it possible to know what decisions our model is making if it is a black box? How can we prevent our model from making biased decisions? The explainability and interpretability of models arise to answer these questions: local and global explainability techniques try to extract information about the decisions made by Artificial Intelligence models.

Explanatory Methods of Artificial Intelligence

Some Artificial Intelligence models are interpretable per se: simple models such as regressions, which by themselves give us the importance of each variable in the decisions made, or decision trees, whose structure shows the path of decisions over the different variables that leads to the final prediction or decision.
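
To make this concrete, here is a minimal sketch (in Python with scikit-learn) of inspecting two such interpretable models: the coefficients of a logistic regression and the printed rules of a shallow decision tree. The dataset is a public one used only for illustration, not the data discussed in this article.

# A minimal, illustrative sketch: inspecting a logistic regression and a
# shallow decision tree trained on a public dataset (a stand-in, not the
# article's data).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# The sign and size of each coefficient show how each variable pushes the
# prediction (magnitudes are only comparable if the variables are scaled).
linear = LogisticRegression(max_iter=5000).fit(X, y)
print(dict(zip(X.columns, linear.coef_[0].round(3))))

# A shallow tree can be read directly as a sequence of decisions.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
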
However, in most cases we will need more complex algorithms that are not as transparent about why a specific conclusion has been reached. These are called black-box algorithms, since their interpretability is practically nil.

Having to use interpretable models can mean a loss of flexibility when solving machine learning problems. For this reason, the so-called model-agnostic explanatory methods arise to provide explainability for black-box models.

These interpretability techniques are independent of the learning model being used. Although they do not give us a complete view of the decisions made by black-box algorithms, they do provide an approximation that helps us better understand the problem we are trying to solve. In turn, agnostic interpretability models can be classified into global explainability models and local explainability models.

Models of Global Interpretability of Algorithms

One of the objectives of model interpretability is to explain which variables an algorithm uses to make a decision. The technique called Permutation Importances can be used for this. The idea is simple: measure the prediction error of a model before and after permuting the values of each variable. In this way, we can calculate which variables have the most influence on the model's predictions. The drawback of this method is that we are assuming there is no dependency between the variables.
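
As an illustration, the following sketch computes permutation importances with scikit-learn's permutation_importance function; the model and dataset below are placeholders chosen only for the example.

# A minimal sketch of Permutation Importances with scikit-learn; the dataset
# and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each variable several times and measure how much the score drops:
# the bigger the drop, the more the model relies on that variable.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, drop in ranking[:5]:
    print(f"{name}: {drop:.4f}")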

A similar method is that of Partial Dependence Plots. This explainability method consists of choosing a series of values with which to evaluate the behavior of a specific variable in the data set. The metric is calculated by fixing the selected variable, for every instance in the set, to each value in a list decided beforehand, and observing how the model's average prediction changes with each of those values.

With Partial Dependence Plots we can measure the effect of the different values of a variable on the algorithm's predictions. In this method we are also assuming that there is no dependency between the variables.
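
The sketch below computes this partial dependence by hand for one variable, reusing the illustrative model and X_test from the permutation-importance example; "mean radius" is just an example variable. scikit-learn's PartialDependenceDisplay offers the same computation with ready-made plots.

# A manual partial-dependence sketch, reusing model and X_test from the
# previous example; "mean radius" is only an illustrative variable.
import numpy as np

grid = np.linspace(X_test["mean radius"].min(), X_test["mean radius"].max(), 10)
for value in grid:
    X_mod = X_test.copy()
    X_mod["mean radius"] = value                         # fix the variable for every instance
    avg_pred = model.predict_proba(X_mod)[:, 1].mean()   # average model prediction
    print(f"mean radius = {value:.2f} -> average predicted probability = {avg_pred:.3f}")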

Models of Local Interpretability of Algorithms

Thanks to global explainability methods, we can know how our model behaves in general terms: which variables it takes into account to make decisions and how the values of those variables affect the predictions overall. However, we frequently want to know what is happening with each prediction the model generates and how each variable's values affect that specific prediction. Local explanatory models help us solve this problem.

The most widespread local explainability model is that of Shapley values. Shapley values come from game theory, where they measure how much each player contributes to the outcome of the whole game. In our case, we try to know how much each variable contributes to the prediction made. The calculation of Shapley values is computationally costly, but they satisfy several properties that always hold and help optimize the algorithms (a small code sketch follows the list). These properties are:

  • Efficiency: the sum of the Shapley values is the total value of the game.
  • Symmetry: if two players contribute equally to every coalition, their Shapley values are equal.
  • Additivity: if a game can be decomposed into two games, its Shapley values are the sum of the Shapley values of the two sub-games.
  • Null player: if a player adds no value to the game, its Shapley value is 0.
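
As a rough illustration of both the idea and its computational cost, the following sketch computes exact Shapley values for a single prediction, replacing the variables outside a coalition by their mean in a background set (a common simplification). It reuses the illustrative data from the earlier examples and restricts itself to four variables so that the 2^n coalitions stay tractable; none of this is the article's actual implementation.

# Exact Shapley values for one prediction (illustrative only). Variables left
# out of a coalition are replaced by their mean in the background data.
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def shapley_values(predict, x, background):
    """Exact Shapley values; loops over every coalition, so cost grows as 2^n."""
    n = len(x)
    base = background.mean(axis=0)

    def value(coalition):
        masked = base.copy()
        idx = list(coalition)
        masked[idx] = x[idx]                 # coalition members keep their real values
        return predict(masked.reshape(1, -1))[0]

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for s in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(s + (i,)) - value(s))
    return phi

# A small model on four variables keeps the number of coalitions tiny.
few = ["mean radius", "mean texture", "mean smoothness", "mean symmetry"]
small_model = RandomForestClassifier(random_state=0).fit(X_train[few].to_numpy(), y_train)

x0 = X_test[few].iloc[0].to_numpy()
phi = shapley_values(lambda a: small_model.predict_proba(a)[:, 1], x0, X_train[few].to_numpy())
print(dict(zip(few, phi.round(3))))   # by efficiency, phi sums to f(x0) minus the base value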

For example, thanks to the additivity property, a set of classifiers can be decomposed: the Shapley values are calculated for each classifier and, by adding the Shapley values obtained for each one, we obtain the explanation of the combined model.

Although Shapley values give us a local view of each prediction, by grouping them we can observe the global behavior of the model. Aggregating the Shapley values obtained in each prediction, we can see how the model behaves with the different values each variable takes.
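
For instance, continuing the toy Shapley sketch above, the mean absolute Shapley value per variable over a sample of predictions already gives a simple global importance measure; libraries such as shap provide richer summary plots built on the same idea.

# Aggregating local Shapley values (from the sketch above) into a global view:
# the mean absolute contribution of each variable over a sample of predictions.
import numpy as np

sample = X_test[few].to_numpy()[:25]   # a small sample keeps the exact method affordable
all_phi = np.array([
    shapley_values(lambda a: small_model.predict_proba(a)[:, 1], x, X_train[few].to_numpy())
    for x in sample
])
print(dict(zip(few, np.abs(all_phi).mean(axis=0).round(3))))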

In a health case, for example, suppose we have a simple model that predicts whether a person will have a heart attack based on the medical tests that have been carried out. We will likely observe that, for positive predictions (the patient has suffered a cardiac arrest), the Shapley values for high cholesterol values are also very high, and that they are low for low cholesterol values.

This indicates that the higher the cholesterol values, the more they push the prediction towards the patient suffering a cardiac arrest. In the same way, the low Shapley values for normal cholesterol values indicate that, although they also influence the prediction, their influence is negative: they lower the probability that the patient suffers a cardiac arrest.

Restrictions of Algorithm Interpretability Models

A series of restrictions mean that interpretability models are not always optimal; how far they apply depends on the problem we want to solve. The most notable limitations of explainability are the explainability-accuracy trade-off and the computational cost.

As we have already mentioned, there are more interpretable models, such as regression algorithms or decision trees, but these do not necessarily obtain good results in terms of the predictions they make. It depends on the problem we are dealing with.

If we have the opportunity to use this type of simpler model and obtain good results, we will also have models that are explainable per se. However, in most Artificial Intelligence problems we will want to use more complex and, therefore, much less explainable models.

This is when agnostic interpretability models come into play, which, as we already know, have certain disadvantages that we must accept, such as the assumption of independence between variables or the computational cost involved in using them.

Agnostic interpretability algorithms are computationally expensive in themselves and, moreover, work on previously trained models. The computational cost of training a model therefore has to be added to the computational cost of running the explainability algorithms.

Depending on the problem we are dealing with, this will be more or less viable. If, for example, we are serving thousands of predictions per minute, or even more frequently, it is practically impossible to obtain an explanation for each of the predictions.

Cases of Explainability of Algorithms in Health

Artificial Intelligence algorithms rely on the knowledge contained in the data to make predictions. For various reasons, generally social, the data may be biased, and therefore AI algorithms learn those biases. With explanatory algorithms we can detect whether our models are biased and try to correct them so that these biases are not taken into account.

In addition, for people outside the field of AI, it can be hard to understand how Artificial Intelligence algorithms work, why they make the decisions they make, or how each variable affects the decision made by the model. In the area of Big Data and AI in Health at the Institute of Knowledge Engineering (IIC), we have faced these situations: in numerous projects, it has been necessary to apply these AI explanatory techniques.

Explanation of the Acceptance of Health Budgets

For example, in the Health area, we have used these techniques to explain to the experts at a clinic which variables influence the acceptance of medical treatment budgets. Our objective was to build a model capable of predicting whether a budget, described by demographic variables and a series of medical treatments, would be accepted by the patient or not. One of the most important requirements of this project was that the reasons why a budget would be accepted or rejected had to be known. This is where explainability comes in.

The procedure carried out was as follows (a purely hypothetical code sketch follows the list):

  • Build an AI model that fits the problem at hand. One of the most restrictive requirements for explainability is the effectiveness of the model: if the model does not achieve the expected results, no matter how well we manage to interpret it, we would be analyzing a model that is not robust.
  • After evaluating and checking the validity of the built model, the Shapley values technique lets us obtain the importance of each variable with respect to the acceptance of the budget.
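
By way of illustration only, a sketch of those two steps might look like the following; the file name, columns and model are invented for the example and are not the IIC's actual data or code (it also reuses the shapley_values function sketched earlier).

# Purely hypothetical sketch of the two-step procedure; budgets.csv and the
# column names are invented for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

budgets = pd.read_csv("budgets.csv")                       # hypothetical data
features = ["age", "total_cost", "num_treatments", "previous_visits"]
X_tr, X_te, y_tr, y_te = train_test_split(budgets[features].to_numpy(),
                                          budgets["accepted"].to_numpy(),
                                          random_state=0)

# Step 1: build and validate a model that predicts budget acceptance.
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Step 2: once the model is considered valid, compute the Shapley values of
# each variable for each budget (here with the exact sketch defined earlier).
phi = shapley_values(lambda a: clf.predict_proba(a)[:, 1], X_te[0], X_tr)
print(dict(zip(features, phi.round(3))))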
