Ethics in Artificial Intelligence, Biases and their Impact

01/07/2020

Artificial Intelligence (AI) is radically changing the world we live in. Although we are often unaware of it, there are already numerous applications around us that use this type of technology to make our lives easier, a trend that will undoubtedly grow in the coming years.

Along with this exponential and enthusiastic development of AI, we should not ignore the ethical aspects of its use and, in particular, the biases that may be built into these future applications.

But what are biases? In our daily lives we make many decisions: the brand of beer we drink, the car we drive, the movies we watch, the books we choose… and, most of the time without knowing it, those decisions are biased.

If we consult the definition of “bias” in the RAE dictionary, it is defined as follows: “Systematic error that may be incurred when sampling or testing selects or favors one response over another.”

In the case of people, it is our brain that chooses the answer, and most of the time that choice rests on our experience, education or beliefs, which are responsible for that biased thinking.

What happens when decisions are made by machines?

With advances in technology, the emergence of Big Data and, above all, of Artificial Intelligence, it is increasingly these algorithms that make decisions. Those choices may be as trivial as the ones mentioned above, but intelligent systems are also beginning to determine whether we are hired for a job, granted credit, or given a higher credit card limit.


Within AI, it is in machine learning where these biases can occur, since it learns from historical information. Biases can appear at various points in this process, and not only in the data, as one might assume.

First of all, we must define and specify the business problem we want to solve. Imagine that a company runs a selection process to find the best employee. This could be the person who generates the highest turnover, the one who progresses fastest in the company, or the one with the most knowledge. This first step, which we sometimes overlook when assessing bias, has to be defined precisely enough.

The second step is selecting the data. The data may not be sufficiently representative of the problem we want to solve, or they may already carry a bias of their own. An example is Amazon’s recruiting algorithm, which proposed only men for certain positions because its training data contained a majority of male staff.
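A quick representation check of this kind can be sketched in a few lines of Python. The data below are made up for illustration; the point is simply to count how a protected attribute is distributed before training on it:

```python
# Illustrative sketch with hypothetical data: before training, check how a
# protected attribute such as gender is represented in the training set.
from collections import Counter

# Toy training sample: each record is (gender, hired); values are invented.
training_data = [
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
    ("male", 1), ("male", 0), ("female", 1), ("female", 0),
]

counts = Counter(gender for gender, _ in training_data)
total = len(training_data)
for group, n in counts.items():
    print(f"{group}: {n}/{total} = {n / total:.0%}")
# A heavily skewed split like this one suggests the model will mostly learn
# patterns from the majority group, as in the Amazon example.
```

On this toy sample the check reports a 75%/25% split, exactly the kind of imbalance that should prompt a review of the data before any model is trained.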

The case of COMPAS is another telling example. This American tool predicts how likely a defendant is to reoffend. The error was traced to the gender and race distribution of its data sample: more men than women, and more people of color than Caucasians. As a result, the algorithm assigned men of color a higher probability of recidivism.

Finally, it matters greatly how we choose the information used to build the algorithm, i.e., which variables will be relevant in each case. Returning to the employee-selection example, we could select age, years of experience, courses taken, merits and so on. The accuracy of the model will depend on the choice of attributes, and it will not always be easy to detect the implicit bias these data carry.
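One reason this bias is hard to detect is that an apparently neutral attribute can act as a proxy for a protected one. The sketch below uses invented numbers to show that, in such a sample, dropping "age" alone would change little, because "years of experience" carries almost the same information:

```python
# Illustrative sketch (toy data): "years_of_experience" is strongly
# associated with age in this made-up sample, so removing "age" from the
# model does not remove the bias it encodes.
toy_sample = [
    # (age, years_of_experience) -- invented values
    (25, 2), (28, 4), (32, 8), (40, 15), (45, 20), (52, 27),
]

ages = [a for a, _ in toy_sample]
exp = [e for _, e in toy_sample]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation(age, experience) = {pearson(ages, exp):.2f}")
```

In this toy sample the correlation is close to 1, which is why auditing attribute choices, and not just the attributes explicitly labeled as sensitive, is part of mitigating bias.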

Given the importance of biases as they relate to Machine Learning, the last five years have seen an increase in the number of articles, research and tools to mitigate them.


On the research side, hundreds of related scientific papers have been published, and many AI development companies are starting to create tools to measure these biases and to explain the decisions made by their algorithms.

In the case of IBM, not only has it developed a commercial product (OpenScale, launched in 2018), but it has also donated AI Fairness 360, an open-source framework based on Python that any organization can use.
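To make this concrete, here is a self-contained sketch of one metric that toolkits such as AI Fairness 360 implement: disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group. The outcome lists are invented for illustration:

```python
# Sketch of the disparate-impact metric with hypothetical decision data.
def favorable_rate(outcomes):
    """Share of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

privileged = [1, 1, 1, 0, 1, 1, 0, 1]    # e.g. hiring outcomes for one group
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]  # e.g. outcomes for another group

di = favorable_rate(unprivileged) / favorable_rate(privileged)
print(f"disparate impact = {di:.2f}")
# A common rule of thumb flags ratios below 0.8 as a sign of possible bias;
# a value of 1.0 would mean both groups receive favorable outcomes equally.
```

Real toolkits compute this and many other metrics directly from a trained model's predictions; the value of a shared framework is that the metric is defined once and applied consistently.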

Amazon is working with the NSF (National Science Foundation) on bias-related research and is applying the result to its developments.

Google already has algorithms in beta to improve bias analysis and explainability.

It is important that the people involved in the development of these algorithms, as well as the companies that use them, ask themselves whether they are taking these biases into account and whether they can explain the decisions their algorithms make. While it is true that most large technology companies have published ethical codes, and at the European level there are recommendations built around the seven key requirements for Trustworthy AI, there is still a long way to go in terms of ethics.


A recent Capgemini report on ethics in organizations showed, among other results, that companies that took ethics into account enjoyed a 44-point advantage in NPS (Net Promoter Score), the most widely used index for measuring customer loyalty and customers’ tendency to recommend a company or product to others. It is good news that ethics can help companies win buyers, and therefore sales, since it means companies will take much more interest in this area.

Thanks to digital transformation, customers now have more influence on companies than ever, able to act as advocates or detractors of products and companies on social networks. Let’s use that power so that the evolution of AI does not leave ethics aside and thus benefits us all.
