Recently we have seen in the media how artificial intelligence (AI) can help during the pandemic, with many examples of its use in prediction and monitoring. We have also seen how it can improve the lives of visually impaired people, with applications such as Microsoft's Seeing AI, and how it can be a great ally for deaf people.
AI has also been used to prevent bullying, as the Spanish startup WatsomApp does, and there are many examples of its use in medical applications to detect tumors more efficiently, in the development of autonomous vehicles, and in sustainable agriculture when combined with IoT.
In short, AI has become an essential technology in a multitude of industries, such as healthcare, banking, manufacturing and commerce, among others, and a great ally for people. According to IDC, global spending on AI is expected to exceed $50 billion this year and reach $110 billion by 2024. The same report highlights that AI will be the most influential disruptive technology of the next decade, positioned as the one that will make business models more agile, innovative and efficient.
There is no doubt that the benefits AI systems can bring to society are very significant, but so are the challenges their use poses. Not a day goes by without important news about the social impact or ethical implications of artificial intelligence.
AI has become an essential technology in a multitude of industries.
Surely we all remember when Microsoft released its chatbot Tay to learn from online interactions, and its swift withdrawal after users trained it with offensive language. Amazon's case is also well known: the algorithm it built to screen potential job candidates selected more men than women, again due to inadequate training data.
We have also seen realistic videos of people who do not exist, generated by deep learning (deepfakes), fake news created by algorithms recombining other articles, and the well-documented bias of the COMPAS application used in the U.S. justice system.
These are just some of the many examples that lead us to question the ethical implications of AI, a responsibility that falls not only on the manufacturers of this technology but also on the companies that use it. In recent years there have been many reflections, scientific articles and initiatives in this regard.
On the one hand, manufacturers such as Amazon, IBM and Google began to publish their AI principles a few years ago. These share common themes, such as using AI to benefit society, respect for privacy, transparency, explainability and accountability, among others. At the same time, many observatories have emerged, and continue to emerge, focused on the importance of developing AI ethically, such as the Partnership on AI, the AI Now Institute and The Institute for Ethical AI and Machine Learning, as well as some in Spain, such as Odiseia or WetheHuman.
All of them aim to raise awareness in society, provide training, and act as a bridge between companies, governments and citizens for the ethical use of artificial intelligence. This is clearly a challenge that concerns society as a whole, and although most AI manufacturers are American, it is curiously Europe that is pioneering action in this field.
Firstly, Europe already established comprehensive data protection regulation (GDPR), in force since May 2018, and as we have seen, data is key to the ethical use of AI. Secondly, Europe created an expert committee, the High-Level Expert Group on AI (HLEG AI), to analyze the ethics of AI developments from all angles (economic and social), and recommendations and legislation have been generated as a result. This work formed the basis of the new regulation announced in April of this year.
The new European regulation, which will apply to all member states, takes a risk-based approach, classifying AI applications into four levels.
Firstly, applications posing an unacceptable risk, i.e. those that threaten people's safety, lives or rights (for example, a voice assistant that encourages violence), will be prohibited.
Secondly, high-risk applications, which include areas such as critical infrastructure, education, employment, essential public and private services, law enforcement and the administration of justice. Before such applications can be placed on the market, strict risk-control obligations will apply: the data used must meet specific standards, how the systems operate must be exhaustively documented, and detailed information must be provided to the user, among other requirements.
Examples include recruitment tools, assisted surgery, student admissions and certain credit-granting applications. In particular, all biometric identification systems are considered high risk and are subject to the strict requirements mentioned above, with limited exceptions such as searching for missing minors or preventing a terrorist attack.
Thirdly, applications considered limited risk must meet transparency obligations: the user must know the basis on which decisions are made. This category includes chatbots, where, for example, the user must be informed that he or she is not talking to a person. Lastly, minimal-risk applications, such as video games or spam filters, face few additional requirements. Further progress is still needed in risk classification and mitigation, but this is a very important start.
Undoubtedly much remains to be done in this area. It is not the first time technological progress has presented major ethical challenges, but perhaps never on such a scale, affecting society as a whole. Europe is setting a precedent that may help other regions begin to do the same.