# Common sense and efficient reasoning

**Dartmouth, and the relevance of knowledge sharing.**

The term “artificial intelligence”, and its consideration as part of science, was born at the workshop held at Dartmouth in July and August 1956. Convened by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories), it brought together twenty of the most respected voices in computing and cognitive science to pursue a significant breakthrough on one conjecture:

Every aspect of learning, or any other feature of intelligence, can in principle be described so precisely that a machine can be made to simulate it. To this end, ideas and work were shared aimed at getting machines to use language, form abstractions and concepts, solve the kinds of problems reserved for human beings, and improve themselves^{1}.

Allen Newell and Herbert A. Simon presented their Logic Theorist^{2}, the first program deliberately designed to perform automated reasoning. Raymond Solomonoff presented his work on algorithmic information, or how to use the calculus of probabilities to go beyond the very specific mathematical problems computers were solving at the time. E. F. Moore's further developments in automata theory were also discussed; the Moore machine, still commonly used today, bears his name.

Minsky presented his idea of proving only statements that were true in a diagram for theorems of plane geometry. Nathaniel Rochester brought this idea to IBM, where he hired Herbert Gelernter, who, with John McCarthy as a consultant, developed the FORTRAN list-processing language. In 1958, McCarthy proposed the LISP language, the “LISt Processor”.

Many ideas of computing were born in LISP, such as the tree data structure, conditionals, and recursion. The ubiquitous IF-THEN-ELSE structure, now accepted as an essential element of any programming language, was invented by McCarthy for use in LISP, where it first appeared in a more general form (the cond structure).
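To make the generality concrete, cond evaluates any number of test/result clauses in order and returns the result of the first clause whose test holds; IF-THEN-ELSE is just the two-clause case. A minimal emulation (a sketch in Python, not historical LISP; the helper names are illustrative) might look like:

```python
def cond(*clauses):
    """Minimal emulation of LISP's cond: each clause is a
    (test, result) pair of zero-argument callables; the result
    of the first clause whose test is true is returned."""
    for test, result in clauses:
        if test():
            return result()
    return None  # no clause matched


def sign(x):
    # IF x < 0 THEN -1 ELSE IF x > 0 THEN 1 ELSE 0, expressed as a cond
    return cond(
        (lambda: x < 0, lambda: -1),
        (lambda: x > 0, lambda: 1),
        (lambda: True,  lambda: 0),  # "else" branch: an always-true test
    )
```

The always-true final clause shows how ELSE falls out of the more general form rather than needing its own keyword.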

This structure is the basis for programming “smart contracts” in blockchain developments: by encoding conditional statements to which both parties agree in advance, a digital structure is created that governs the management and execution of the contractual clauses.
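As an illustration of that conditional core (a toy sketch in Python, not a real blockchain language such as Solidity; the class and method names are hypothetical), an escrow clause reduces to an IF both parties agreed THEN release ELSE stay locked:

```python
from dataclasses import dataclass, field


@dataclass
class EscrowContract:
    """Toy model of a smart contract's conditional clause:
    funds are released only if both parties have signalled agreement."""
    amount: float
    approvals: set = field(default_factory=set)

    def approve(self, party: str) -> None:
        # Record one party's agreement to the pre-established clause.
        self.approvals.add(party)

    def settle(self) -> str:
        # IF both parties agreed in advance THEN execute ELSE hold.
        if {"buyer", "seller"} <= self.approvals:
            return f"release {self.amount} to seller"
        return "funds remain locked"
```

Real smart-contract platforms add persistence, signatures, and consensus on top, but the contractual logic itself is this kind of conditional.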

The evolution of AI multiplied its development because the meeting's participants openly exchanged and generated knowledge around mathematics, and more specifically calculation and logic, in an applied-research model demonstrated by programming ideas, theories, and theorems, opening great possibilities for the future.

**Accuracy in the degree of plausibility: mathematics, calculation, logic, and language.**

With the perspective of time, John McCarthy^{3} indicated in 2006 that, as many of these systems evolved, they restricted their work in logic to make computation more efficient. While the development of programming aimed at performing ever more complex calculations, McCarthy worked to keep the focus on the framework of a complete logic, continuing to pursue the creation of systems that can reason about their own methods of reasoning in order to decide on efficient reasoning.

Logic, systematized by Aristotle, is based on the need to articulate thought according to strictly rational criteria. The logical criteria of the world of mathematics and of everyday reasoning, discussed centuries later at Dartmouth, lead us to the design of AI programs that require binary logic.

In 1308, Ramón Llull wrote “Ars generalis ultima”, in which he perfected his method of using paper-based mechanical means to create new knowledge from combinations of concepts. He describes a new “characteristic method” that should give humans the ability to calculate in all domains accessible to reasoning and, where these are not exact, to use probability to appreciate the degree of plausibility of the reasoning.

In 1666, Gottfried Wilhelm Leibniz published “Dissertatio de arte combinatoria”, following Llull and proposing an alphabet of human thought, arguing that all ideas are nothing more than combinations of a relatively small number of simple concepts. Leibniz trained with Christiaan Huygens, who proved central to his later development of differential and integral calculus.

Leibniz argued that the **complexity of human reasoning could be translated into the language of calculation,** and that, once understood, **it could be the means of resolving differences of opinion and argument.** Among other things, he described the properties and method of linguistic resources such as conjunction, disjunction, negation, the set, inclusion, identity, and the empty set, all of them useful for understanding and carrying out valid reasoning and distinguishing it from invalid reasoning. His logic is one of the most important links in **the process of mechanizing thought according to a mathematical model.**

In the 21st century, exponential progress has been made in the programming of machines, solving complex problems in simple ways. More and more specialists work in machine learning, automatically detecting patterns in a data set and using them to predict future data, or to make other kinds of decisions in environments of uncertainty.
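The simplest instance of “detecting a pattern in a data set and using it to predict future data” is fitting a line by least squares; a self-contained Python sketch (illustrative only, with hypothetical function names) makes the learn-then-predict loop explicit:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b on a small data set."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b


def predict(x, a, b):
    # Apply the learned pattern to an unseen input.
    return a * x + b


# Learn the pattern from observed data (here, y = 2x), then extrapolate.
a, b = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
```

Modern machine learning replaces the line with far richer model families, but the structure (fit parameters to past data, apply them to future data) is the same.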

Progress continues in deep learning, which allows computational models composed of multiple processing layers to learn representations of data with multiple levels of abstraction. And scientific research and development in neuroscience may yet achieve a more resilient, consistent, and flexible form of artificial intelligence^{4}.

High-impact breakthroughs occur when a common purpose brings together diverse thinkers who collaborate as a team, combining historical knowledge, unique backgrounds, disciplines, experience, perspectives, and a margin for serendipity. The focus of the Dartmouth workshop was to share knowledge and establish a “how” over a well-defined “what”.

The answer to the “why” seems to have come quickly in our day: after more than a year of pandemic, and with the urgency of achieving the 17 Sustainable Development Goals proposed by the UN, gaining knowledge in the development of holistic, inclusive, and ethical AI could be the immediate answer towards 2030.

- “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” (McCarthy et al., 1955); McCorduck, Pamela (2004), *Machines Who Think* (2nd ed.), Natick, MA: A K Peters, Ltd., ISBN 1-56881-205-1, pp. 161-170.
- “AI Past and Future”, John McCarthy. The Dartmouth Workshop, as planned and as it happened (stanford.edu).
- “Towards the end of deep learning and the beginning of AGI”, Towards Data Science.