
Coexistence with this new intelligent “entity” is the greatest challenge we will face as a species. (Published in Muy Interesante on 12/27/2024)

Xabi Uribe-Etxebarria Jan 22, 2025
Some 50,000 years ago, somewhere in the Middle East, there was a meeting. A unique encounter that changed the course of human history forever.
An individual from the Neanderthal family, the human species that had for millennia dominated the peninsula of Asia we now call Europe, faced its most important challenge.
This species had faced countless challenges throughout its existence: predators, natural disasters, pandemics and ice ages. Now it faced a greater one. It found itself before a new “being” as intelligent as itself, Homo sapiens, and the traits that had differentiated it as a species, such as cognitive capacity, reasoning, symbolic thought or the ability to communicate through speech, were no longer an advantage over this newcomer.
For the next 20,000 years, both species shared the same ecosystem, but eventually our species, Homo sapiens, ended up displacing, absorbing and ultimately extinguishing the native species of European territory.
We do not know for certain whether this period saw conflict or coexistence. What is certain is that there was interspecies hybridization: the genome of many modern humans contains a small percentage of Neanderthal DNA.
The tools
Sapiens, our species, not content with its biological capabilities, has created technology throughout history to improve itself: tools that enhance us both physically and intellectually, further widening our advantage over other animal species.
We created axes and flint knives as extensions of our teeth. Bows, arrows and slings as extensions of our arms, so we could hurl projectiles harder and farther. We created carts pulled by animals, and later by combustion engines, as extensions of our legs, so we could travel faster and farther. Tools are so closely linked to our physique that they have even contributed to changing it: the domestication of fire, for example, shaped the evolution of our digestive system.
But we have not been content to improve only our physical abilities; we have also wanted to increase our cognitive ones. We created writing as an extension of our memory and our capacity to communicate. We created mathematics to explain the universe around us and, in combination with writing, to make more complex and precise calculations.
Millennia later, we created calculating machines and computers as extensions of our reasoning capacity, and so on through countless technological advances that overcame our cognitive limitations, until we finally created Artificial Intelligence.
“Weak” and “General” Artificial Intelligence (AGI)
In this sense, AI has long been able to perform certain cognitive tasks better than the most capable human being: playing games like chess, translating texts, or detecting cancers in X-rays and CT scans where the human eye sees nothing.
This is what we call “narrow” or “weak” artificial intelligence: each task, whether playing chess, detecting anomalies in images or driving autonomously, requires a different algorithm, or at least different training for each algorithm.
In other words, a weak AI model that plays chess better than the world champion cannot drive a car autonomously, translate text or detect cancers in X-rays. It is trained to do only one task.
In short, weak AI systems are not generalist algorithms: they cannot perform tasks beyond the purpose for which they were trained.
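To make the idea concrete, here is a minimal sketch of a narrow AI system, using scikit-learn's toy digits dataset (the library and the task are my illustrative choices, not an example from the article). The model learns exactly one task, and nothing it learns transfers beyond it:

```python
# A minimal sketch of "narrow" or "weak" AI: a classifier trained for
# one task only. Library and dataset (scikit-learn's 8x8 digit images)
# are illustrative assumptions, not the article's example.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # trained for a single purpose
print(f"digit accuracy: {model.score(X_test, y_test):.2f}")

# The model has no concept of text, speech, chess or driving: feeding
# it anything other than an 8x8 digit image is meaningless. That task
# specificity is exactly what makes it "narrow".
```

However well it classifies digits, asking it to translate a sentence is not merely hard; the question cannot even be posed to it.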
However, the same does not happen with the human brain. We can teach a child to play chess, and we can teach that same child to speak a language, to spot animals in pictures, to walk or to drive a vehicle, and the child will learn without difficulty. The human brain is a machine prepared to learn a virtually unlimited number of tasks. It is a generalist, or at least, although it has distinct parts, it is orchestrated to function as a single generalist organ.
If we could do that with a machine, we would have achieved what we call Artificial General Intelligence (AGI).
Until the emergence of a new artificial neural network architecture, the “transformer” (which gives the “T” in GPT its name), and with it the so-called large language models (LLMs) built on these large networks, it was thought that AGI would arrive sometime beyond 2045. Today that point is as close as it is diffuse.
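The computational core of that transformer architecture is strikingly small. Below is a minimal sketch of scaled dot-product self-attention, the operation that lets every token weigh every other token; the dimensions and random inputs are illustrative assumptions, and real models stack many such layers with learned projections:

```python
# A minimal sketch of scaled dot-product attention, the core operation
# of the transformer architecture (the "T" in GPT). Dimensions and
# random inputs are illustrative assumptions.
import numpy as np

def attention(Q, K, V):
    """Each output row is a weighted mix of the rows of V; the weights
    measure how strongly each query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V

# Toy self-attention: 4 tokens, each an 8-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = attention(tokens, tokens, tokens)  # every token attends to all others
print(out.shape)  # (4, 8): one mixed representation per token
```

Stacked and scaled up across billions of parameters, this simple mixing operation is, by most accounts, where the surprising generality discussed below comes from.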
Many people think that AGI will be binary: an AI model will either have it or not, something like having life. Although there is some debate here too, there is a certain consensus that life is binary; an animal or a plant has it, and a rock does not. On this view, a single technological advance will make AGI possible, a singular point in time, a before and after. Metaphorically, it would be something like the abiogenesis of AGI, abiogenesis being the term biologists use (in one of the theories) for the moment when life arose from inert matter.

Image credits: ChatGPT 4o, OpenAI; prompt: Xabi Uribe-Etxebarria
But others of us think that, although there are catalysts, AGI will be, and already is, gradual. Indeed, thanks to the technological advance of these large language models (LLMs), we have already reached a certain degree of “generality” in AI, and with the new “reasoning” models, such as the o3 announced a few days ago by OpenAI, even more so.
Although there is no consensus on the definition of AGI, for most experts and publications it is a theoretical representation of an artificial intelligence capable of solving complex tasks with general human cognitive capabilities.
Thus, like a child’s brain, today’s digital brains are already prepared to do most of the cognitive tasks that humans can do. In other words, they are already generalists, and they are becoming more so. They can translate texts, answer questions better than the average human, create images and videos, understand a video, write code, solve complex mathematical problems, summarize texts, reason, and even learn to walk if we provide them with a “body” like that of a robot. And all of this without having been specifically trained for each of these tasks: these are emergent properties.
Emergence
Emergent systems are those in which the properties of the whole that results from grouping many small parts cannot be explained by the properties of the parts themselves, but only by their integration and interaction. In other words, capabilities emerge that apparently bear no relation to the properties of the constituent parts.
Clear examples of emergence in physics are phenomena such as wetness (a single water molecule is not wet) or the superconductivity of some materials. An example that is easier to grasp is a film shown on an LED television, compared with the elements that make up each image: the pixels. The film we perceive, its message and its plot, cannot be explained by the properties of a single pixel, which merely changes color at a certain frequency; yet the orchestration of all the pixels produces a story capable of conveying complex comic, romantic, dramatic or didactic content.
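Emergence is easy to demonstrate in code. The toy below (my illustration, not the author's) is Conway's Game of Life: each cell obeys one trivial local rule, yet the grid as a whole produces coherent moving structures, such as the “glider” seeded here, that no inspection of the single-cell rule would predict:

```python
# A toy demonstration of emergence: Conway's Game of Life. The rule
# below concerns only one cell and its 8 neighbours, yet gliders and
# other structured patterns emerge at the level of the whole grid.
import numpy as np

def step(grid):
    """Apply the Game of Life rule once to a 2D array of 0s and 1s."""
    # Count each cell's 8 neighbours (edges wrap around).
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A cell is alive next step if it has 3 neighbours,
    # or 2 neighbours and is already alive.
    return ((n == 3) | ((n == 2) & (grid == 1))).astype(int)

grid = np.zeros((8, 8), dtype=int)
for y, x in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:  # seed a glider
    grid[y, x] = 1

for _ in range(4):  # after 4 steps the glider has moved one cell diagonally
    grid = step(grid)
print(grid)
```

Nothing in the one-line update rule mentions movement, yet the glider travels; in the same loose sense, nothing in a language model's next-token objective mentions reasoning, yet reasoning-like behavior appears.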

The synchronized movement of a flock of birds is not a property of a single bird, but of the group as a whole: another example of how individual interactions generate emergent behavior. (Image credits: ChatGPT 4o, OpenAI; prompt: Xabi Uribe-Etxebarria)
In humans, reasoning, creativity and consciousness could also emerge from some kind of collective behavior of the neurons in our cerebral cortex.
Something similar could be happening in these large language models, since, as we have seen, unexpected behaviors and qualities emerge from them, such as reasoning, creativity and perhaps even some degree of consciousness. Among them, of course, is the ability to perform generalist tasks, which leads us to affirm that we have already achieved some degree of AGI.
It is not a tool
This time we are not creating a simple tool that augments us, as many claim, but an entity with agency, with the ability to make decisions, with an existence of its own. Something like creating a life of its own.
This ability to act intentionally, or with agency, is what, for many philosophers and thinkers, has distinguished human beings from other animals and, of course, from machines.
A hammer or a wrench is a tool, as is even something as powerful as an atomic bomb, but none of these inventions has agency; that is, none has the ability to make decisions. Agency lies with whoever creates or uses them: humans and, in my opinion, animals, at least to some degree.
Weak AI applications, such as a voice recognition system or a system that detects cancers in images, are also tools. Just like a screwdriver or a drill, they are made for a specific purpose and therefore have no agency or existence of their own. But AGI is a very different invention.
And if this is so, why do we continue to deny the evidence? Why do we keep saying that these systems are mere tools, that they have no agency, or that AGI has not arrived?
Partly out of ignorance, perhaps out of self-interest, but mainly out of fear: fear of being displaced by this new intelligence, fear of change, fear of losing a way of life, fear of the uncertainty of seeing that cognitive abilities once thought exclusive to humans, such as reasoning, creativity, generality and probably a certain degree of consciousness, have already been acquired by machines. We fear losing the “status” of the “human” as a “being” with “magical”, “divine”, unreplicable characteristics that differentiate us from machines and from the rest of the animal species.
A fear similar to that felt by our ancestors when Copernicus or Galileo stated that the Earth was not the centre of the universe or when Darwin proposed that humans are the product of evolution and not of divine construction.
After all, knowing that our mind can be replicated in a machine opens our eyes: in some way it rejects, or at least questions, the exceptionalism of the human being as the pinnacle of intelligence, a notion that has always had more to do with belief than with science.
Superintelligence
In November, a report to the United States Congress defined AGI as systems that match or exceed human capabilities across all cognitive domains and that would surpass the most capable human minds at every task. Like this one, many definitions are raising the bar, and in my opinion this definition is closer to the concept now called Artificial Superintelligence (ASI), which commonly refers to an intelligence far above geniuses and the most gifted human minds. But whatever the correct nomenclature, the point this new definition refers to is just around the corner. Sam Altman (CEO of OpenAI) and Elon Musk, as well as the recent Nobel laureate in Physics Geoffrey Hinton, all agree that it will arrive before the end of the decade, and some even claim that it could arrive this year.
The agentic era
We thus enter 2025 immersed in AGI and with the first signs of the agentic era: a new era in which intelligent agents will soon surpass humans in both intelligence and number. In other words, in a very short time there will be more AI agents than people on Earth.

Image credits: ChatGPT 4o, OpenAI; prompt: Xabi Uribe-Etxebarria
The great challenge
Fifty thousand years after the encounter between those two species, a new encounter is taking place. This time, the new “intelligent” species is born not of biological evolution but of technological evolution; in short, of our own “intelligence”.
What will coexistence with this new entity look like, and what do we want it to look like? This is the greatest challenge we will face as a species. There may be coexistence, there may be confrontation, or, as happened with Sapiens and the Neanderthals, there will probably be interspecies hybridization.
Xabi Uribe-Etxebarria
Founder & CEO, Sherpa.ai