The value of emotions and deep learning

02/12/2020

The value of time

In 1973, Michael Ende published “Momo”, named after a girl who lives in a town in a distant land and about whom no one, not even she herself, knows anything. One day some strange men arrive, dressed in gray, with hats and briefcases: agents who convince people to deposit their time in what they present as a time savings bank, an unequal exchange from any point of view.

The author makes us reflect on the importance of perceiving how we live our time, the time that passes and that we claim as our own, raising questions about whether time can be saved, why we would save it, and how we then spend the time we have saved.

A few days ago, a dear friend made me reflect on the accounting of the value of time and the balance of a personal scale: on one side, the seconds, minutes, hours, years and life cycles spent in the company of the people who wish us well, those who give us moments of vitality and energy; on the other, the seconds, minutes, hours, years and life cycles we devote to our purposes or life projects and to our professional activity; and between them, the intersection of the two kinds of experience.

The concepts of reaching goals or having quality of life, the moments that feel like hours but are minutes, or the moments that feel like seconds but are days, are personal, dependent on a changing context and difficult to measure mathematically. So when we ask, in numerical terms, whether an hour of work or leisure has been efficient or rewarding, whether it has had value or been a wasted moment, we do so from an individual, human perspective that is, depending on the person, more or less emotional.

In 2020, looking at the development of Artificial Intelligence software, we know that the next step on the evolutionary roadmap of software development is the recognition of emotions and its applications.

AlexNet and deep learning

In 2012, the modern era of DEEP LEARNING began. Three years earlier, in 2009, the Artificial Intelligence researcher Fei-Fei Li published a database called ImageNet, which contained more than 3 million images organized with labels describing what was in them. Li believed that if these algorithms had more examples of the world in which to find patterns, they could come to understand more complex ideas.

Thus, in 2012, an artificial neural network designed by Alex Krizhevsky, then a graduate student at the University of Toronto, together with his fellow student Ilya Sutskever (now research director at OpenAI) and advised by Geoff Hinton (winner of the 2018 Turing Award together with Yoshua Bengio and Yann LeCun), dominated the ImageNet contest, beating every other research lab by the huge margin of 10.8 percentage points in accuracy.

The technology’s prowess at decision-making soon put the words “deep learning” on the lips of every Silicon Valley founder and executive and sparked what we call the AI boom. All three ended up working at Google, and other big tech companies such as Facebook, Amazon and Microsoft began to position their businesses around the technology.

As Li had predicted, data was the key. But Hinton, Krizhevsky and Sutskever had also stacked neural networks on top of each other: one just found shapes, while another looked at textures, and so on. Networks built this way are called deep neural networks, and the field, deep learning. The resulting neural network architecture is now known as AlexNet, and the paper describing it is considered one of the most influential ever published in computer vision, having inspired many more papers that accelerated deep learning; it has now been cited more than 70,000 times according to Google Scholar.

Krizhevsky worked on Google Photos and then became deeply involved in the company’s autonomous vehicle project, leaving Google in September 2017 to join Dessa (formerly known as DeepLearni.ng), which was acquired by the financial services company Square in February 2020. Square and Dessa said they were partnering to “build machine learning applications that enable people to do more with their money than ever before and generally push the boundaries of what is possible with AI.”

We speak of DEEP LEARNING when we refer to a field in which neural networks are layered to understand complex patterns and relationships within data. When the output of one neural network is fed into the input of another, effectively “stacking” them, the resulting neural network is “deep.”
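To make the idea of “stacking” concrete, here is a minimal sketch in PyTorch, not AlexNet itself: the name TinyDeepNet, the layer sizes and the number of classes are illustrative choices, but the structure shows how the output of one stage feeds the input of the next, with early layers picking up simple shapes and later layers combining them into more abstract patterns.

```python
# Minimal sketch of a "stacked" (deep) network. Illustrative only:
# TinyDeepNet and its layer sizes are assumptions, not AlexNet.
import torch
import torch.nn as nn

class TinyDeepNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # first stage: low-level patterns (edges, simple shapes)
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            # second stage: combinations of the first (textures, parts)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # each stage's output feeds the next stage
        return self.classifier(torch.flatten(x, 1))

model = TinyDeepNet()
logits = model(torch.randn(1, 3, 224, 224))  # one 224x224 RGB image
print(logits.shape)  # torch.Size([1, 10])
```

AlexNet applied the same principle at much larger scale, with eight learned layers trained on ImageNet; the sketch above simply makes the stacking visible in a few lines.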

Establishing the emotional state

In 2020, McKinsey’s global survey on artificial intelligence (AI) included questions about the adoption of deep learning, with the result that 16 percent of respondents say their companies have taken deep learning beyond the pilot stage. Once again, high-tech and telecommunications companies are leading the charge, with 30 percent of respondents from those industries saying their companies have embedded deep learning capabilities.

One of the uses of deep learning is recognizing the basic forms of affective expression that appear on people’s faces. Non-verbal communication channels such as body movement, facial expression, gesture and eye movement can likewise be captured, and it is worth noting that body movements can convey emotional states more forcefully and can be recorded by cameras that do not need to be close to the person. Since 2018 we have seen papers published in which, using neural network architectures with different parameters, the authors show how an emotional state can be recognized in the movement patterns of the whole body.
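As a hedged sketch of how such a facial emotion classifier is commonly assembled, the code below fine-tunes a generic ImageNet-pretrained backbone with a new classification head. It is not any of the published models mentioned above: the EMOTIONS label list, the predict_emotion helper and the file face.jpg are hypothetical, and in practice the new head would first be trained on a labeled dataset of faces.

```python
# Hedged sketch: adapting a generic pretrained CNN for facial emotion
# recognition. The emotion labels and the untrained head are
# hypothetical placeholders, not a published model.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]  # assumed label set

# Start from an ImageNet-pretrained backbone and replace the head.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))
model.eval()  # in practice, fine-tune on labeled face images first

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_emotion(path: str) -> str:
    """Return the most likely emotion label for a face image (sketch)."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return EMOTIONS[int(probs.argmax())]

# Hypothetical usage: print(predict_emotion("face.jpg"))
```

The design choice worth noting is transfer learning: rather than training a network from scratch, the features learned on ImageNet are reused and only the final layer is retrained for the new, smaller emotion-labeled dataset.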

In 2016, the AI Now Institute emerged from a symposium led by the Obama White House Office of Science and Technology Policy, headed by Meredith Whittaker, founder of Google’s Open Research Group, and Kate Crawford, principal researcher at Microsoft Research. The AI Now Institute studies the short-term implications of AI in the social domains: inequality, labor, ethics and healthcare. In its 2019 report, the AI Now Institute warned that emotion recognition technology should not be used in decisions that “impact people’s lives and access to opportunities,” such as hiring decisions or pain assessments, because it is not accurate enough and can lead to biased decisions.

And just as Michael Ende makes us reflect in “Momo”, the AI Now Institute, in 2018, makes us reflect on the impact of advances in AI on the recognition of emotions and its applications. As a tool, it launched a framework for algorithmic impact assessments (the AIA report) so that governments can evaluate the use of AI in public agencies. This assessment is like an environmental impact assessment in that it requires public disclosure and access by outside experts, in order to evaluate the effects of an AI system and any unintended consequences.

Applying an AIA would allow systems to be examined for problems such as skewed results or biased training data. And if we also managed to record these results and measures in a decentralized blockchain network, then perhaps in the coming years, with much more training, development and attention, we could arrive at that unique, non-transferable and private model of personal accounting with which to work on personal growth and achieve greater well-being globally.