Milestones in the history of Artificial Intelligence

Pedro Alvarado
7 min read · Apr 9, 2023

Artificial Intelligence is one of the most striking and revolutionary technologies of the 21st century, yet its history goes back much further. From the first experiments in the 1950s to the present day, AI has come a long way, full of discoveries, challenges and achievements. But what are the milestones that have marked the evolution of Artificial Intelligence throughout its history?

Credits: BlackJack3D

The Turing Test

In 1950, Alan Turing published “Computing Machinery and Intelligence”, where he set out to answer the question “Can machines think?”. To address this question, Turing proposed a test to evaluate the intelligence of a computer.

In the test, judges pose written questions to a human being and a computer alike, without knowing which is which. If, after reviewing the answers, the judges cannot tell the computer’s responses from the human’s, the computer has passed the classic Turing test. But what does passing the test imply? Since the computer and the human provided indistinguishable answers, the conclusion is that the computer’s level of intelligence is comparable to a human being’s. But is this so?

One attack on the question “Can machines think?” comes from the philosopher John Searle. In 1980, Searle presented his famous Chinese room thought experiment. Imagine finding yourself in a closed room and receiving a piece of paper covered in Chinese characters through a small opening in the wall. You have no knowledge of Chinese, but you do have a set of instructions, written in a language you understand, that tells you which characters to write in response. Following these instructions, you write a few characters on the paper and return it through the opening. Someone outside the room will think you fully understand Chinese, when in reality you are only following a set of rules, and the characters on both papers remain incomprehensible to you.

The Chinese room experiment suggests that a program cannot give a mind, consciousness, or understanding to a computer, even if the computer and its program appear to be very intelligent.

Regardless of its validity as a measure of machine intelligence, the Turing test was important because it was one of the very first efforts in the field of AI to ask whether machines could ever think.

Dartmouth Summer Research Project on Artificial Intelligence

In the summer of 1956, the first conference on Artificial Intelligence was held. The conference was formally proposed by John McCarthy (Dartmouth College), Marvin Minsky (Harvard), Nathaniel Rochester (IBM) and Claude Shannon (Bell Labs). The proposal stated the following:

An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The scientists who proposed the conference shared the idea that what we call thinking is not limited only to the human brain and that it can be analyzed formally and scientifically. They believed that the best non-human tool to accomplish this task was the digital computer.

The Dartmouth conference was important for two reasons. First, it was there that John McCarthy coined the term “Artificial Intelligence”, which from then on came into wide use. Second, it was one of the first organized attempts to do scientific research in Artificial Intelligence with the goal of replicating what we call “thinking” in a digital computer.

Deep Blue

Now we jump to 1997, the year of an event that first demonstrated the potential of AI to outperform humans in complex tasks.

For decades, technologists treated chess as a yardstick for AI, since it is a game that demands strategy, careful reasoning, logic and foresight. In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov 3.5 to 2.5 in a six-game match. The machine combined brute-force search with handcrafted evaluation heuristics, running on specialized hardware that could analyze some 200 million positions per second to choose the best move at any given moment.
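Deep Blue’s actual engine was proprietary and ran on custom chess chips, but the core idea behind classical chess programs of its era is minimax search with alpha-beta pruning: explore the game tree, score the outcomes, and skip branches that a rational opponent would never allow. Here is a minimal, runnable sketch of that idea on a toy game (“take 1 to 3 stones; whoever takes the last stone wins”) rather than chess, which would take far more code:

```python
# Minimax with alpha-beta pruning, illustrated on a toy stone-taking game.
# Scores are from the first player's (the maximizer's) point of view:
# +1 = win with perfect play, -1 = loss.

def alphabeta(stones, alpha, beta, maximizing):
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else +1
    best = float("-inf") if maximizing else float("inf")
    for take in (1, 2, 3):
        if take > stones:
            break
        score = alphabeta(stones - take, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:  # the opponent would never allow this line: prune
            break
    return best

# Piles whose size is a multiple of 4 are lost for the player to move.
for n in range(1, 9):
    result = alphabeta(n, float("-inf"), float("inf"), True)
    print(n, "win" if result == 1 else "loss")
```

A chess engine replaces the toy rules with move generation and a heuristic evaluation of non-terminal positions, and Deep Blue added enormous hardware parallelism on top; the search skeleton, however, is the same.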

But does the fact that Deep Blue beat Garry Kasparov prove that the computer was smarter than the human? Personally, I don’t think so. What it really proves is that computers have far more raw computing power than humans. The importance of Deep Blue, I think, is that it showed computers could be just as good as humans at tasks that were thought to be reserved for us.

I think it is important to reflect on the following: a computer with gigantic computing power but only weak, narrow AI managed to beat a human chess champion who has general intelligence but far less computing power. So what would happen with a computer that had gigantic computing power and general intelligence?

AlphaGo

Go is a board game that originated in China. Two players take turns placing black and white stones on a 19 × 19 board. The objective is to control more territory than the opponent, and stones are captured and removed when they are surrounded by the opponent’s stones. The game is complex because of its deep strategy and its enormous number of possible positions, greater than the number of atoms in the observable universe.
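That comparison sounds like hyperbole, but a back-of-the-envelope calculation makes it concrete. Each of the 19 × 19 = 361 points can be empty, black or white, giving a crude upper bound of 3^361 board configurations; the count of strictly legal positions is lower (about 2 × 10^170 by published enumerations) but still dwarfs the roughly 10^80 atoms estimated in the observable universe. A few lines of Python confirm the orders of magnitude:

```python
import math

board_points = 19 * 19           # 361 intersections
upper_bound = 3 ** board_points  # each point is empty, black or white

# 361 * log10(3) ≈ 172, so 3^361 ≈ 10^172
print(f"3^361 ≈ 10^{math.floor(math.log10(upper_bound))}")
print("atoms in the observable universe ≈ 10^80")
```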

In 2016, AlphaGo, an artificial intelligence developed by DeepMind, made history by defeating a top-ranked professional Go player with no handicap, beating Lee Sedol of South Korea four games to one.

In the second game against Lee Sedol, AlphaGo played a move (the now famous move 37) that stunned the human experts. At first the move seemed to make no sense, but as the game unfolded it proved to make perfect sense. I find it fascinating that humans have been playing Go for thousands of years, and then this AI comes along and produces strategies that no human had thought of in all that time.

A year later, an improved version called AlphaGo Zero learned entirely on its own, playing millions of games against itself without any human game data, and outperformed its predecessor. In a sense, AlphaGo Zero rediscovered thousands of years of human understanding, creativity and training, and then produced superior strategies, all in a matter of days.
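DeepMind’s papers describe AlphaGo Zero’s training as a simple loop: the current network plays games against itself guided by Monte Carlo Tree Search, and the network is then trained to predict the search’s move choices and the eventual winner. The sketch below shows that loop schematically; to be clear, network, game and mcts_move are hypothetical stand-ins, not DeepMind’s actual code:

```python
# A highly simplified, schematic sketch of AlphaGo Zero-style self-play
# reinforcement learning. `network`, `game` and `mcts_move` are hypothetical
# stand-ins for a deep neural network, a Go implementation, and a Monte
# Carlo Tree Search guided by the network.

def self_play_training(network, game, iterations, games_per_iter=1000):
    for _ in range(iterations):
        examples = []
        # 1. Self-play: the current network plays against itself, and we
        #    record each position with the tree search's move probabilities.
        for _ in range(games_per_iter):
            state, history = game.initial_state(), []
            while not game.is_over(state):
                move, search_probs = mcts_move(network, state)
                history.append((state, search_probs))
                state = game.apply(state, move)
            winner = game.winner(state)
            examples += [(s, p, winner) for s, p in history]
        # 2. Learning: train the network to predict the search's move
        #    probabilities (policy) and the final outcome (value).
        network.train(examples)
    return network
```

No human games ever enter the loop: the search improves on the raw network, and the network then learns to imitate the improved search.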

I find it very interesting that AlphaGo demonstrated the ability of AI to learn complex skills largely on its own, and that it discovered new strategies without human help. It shows that AI can help us discover things we might never have thought of ourselves.

ChatGPT

Every once in a while, an innovative product comes along and transforms a technology that had been relegated to the realm of experts and hobbyists into something everyone can use. On November 30, 2022, Artificial Intelligence got what may turn out to be that kind of product.

Imagine you are at a party and you meet someone very smart who always has something interesting to say about any topic. You can ask them anything you want, from how social networks work to how to make the best chocolate cake.

Well, ChatGPT is like that smart friend, but in the form of an artificial intelligence tool. You can talk to it at any time and ask it anything you want, and it will answer you as if it were another human.
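For developers, the same kind of model is also reachable programmatically. Here is a minimal sketch using the openai Python package (the pre-1.0 API that was current when this was written); the model name and prompt are just examples, and an API key is assumed to be set in the OPENAI_API_KEY environment variable:

```python
import os
import openai  # the pre-1.0 `openai` package

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask the model a question, chat-style.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model family behind the original ChatGPT
    messages=[
        {"role": "user", "content": "How do I make the best chocolate cake?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```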

Since its launch on November 30, 2022, it has attracted attention for its detailed, articulate answers across many domains of knowledge. However, its tendency to state incorrect facts with confidence has been identified as a major drawback.

One of the most surprising things about ChatGPT was how quickly it gained users. Just 5 days after launch it already had over a million users; at the time of writing (4 months after launch) it has more than 100 million. ChatGPT is important for many reasons, but one of them is that it turned AI from a tool reserved for a small group of specialists into something everyone can use.

What’s next?

It is almost impossible to say what will come next in the history of AI, because advances in the field now arrive week after week. The future of AI is hazy: we cannot predict with certainty how the next breakthroughs will unfold.

However, the history of Artificial Intelligence is a relatively new and evolving story. There is still much to be discovered and developed in the field. Everyone can get involved in the history of AI, either by working in the field, learning about it, or just having a basic understanding of its significance for the future.
