
A (Brief) Visual Guide Through the History of AI

A brief visual tour across 60 years of AI breakthroughs



Jair Ribeiro

3 years ago | 4 min read

When we hear about artificial intelligence, we tend to picture what we see in films, which creates the perception that AI is all about robots or things from another world. That perception has been changing in recent years as AI adapts to new applications and spreads across many sectors.

Artificial intelligence originated some 64 years ago in scientific studies whose authors could scarcely have imagined what it would one day be capable of.

Last year, I published an article that took a quick look at some of the most important events in AI since its beginning, along with some interesting links.

That article, called “An easy guide to the history of Artificial Intelligence,” had a surprisingly high number of views and reads on Medium. I recently decided to include it in one of my books, “AI in 2020 — A Year of Artificial Intelligence,” published on Amazon Kindle, which is a collection of my best articles about AI.

I also decided to repurpose that article as a simple, direct visual guide to the most important breakthroughs in the AI field since 1943, which I have published on my Instagram account.

Of course, I would like to share it here with my Medium friends as well, because I think it can be interesting for the many people taking their first steps in AI. And what better way to understand a technology than to learn about its history?

What is Artificial Intelligence?

Many definitions of AI focus on process improvement and the ability to act across fields such as health, education, and agriculture, providing companies and the people who work in them with a higher quality of life.

In many cases, A.I. is best described as a task- or result-focused system that adapts its behavior to meet its goals, continually adjusting itself based on the data it receives.

A.I. can also be viewed as an umbrella term for technology that behaves intelligently: adaptive technology that allows machines to accomplish tasks in changing or ambiguous environments.

My own definition, after studying, reading, writing, and working with A.I. every day, would be something like this:

“Artificial intelligence is a type of technology that mimics human thought, allowing machines to act on their own and perform functions similar to human intelligence, such as the ability to perceive, learn, reason, and act.” — Jair Ribeiro.

In my article “These Are the Best Definitions of Artificial Intelligence You Can Read Today,” I explored the definitions of A.I. that I consider the most accurate and relevant from a business and real-world point of view.

The birth of AI

Long before today's systems, several protagonists shaped the history of artificial intelligence, all with the same goal: to create a machine that could think and act like a human.

Alan Turing was one of these figures, and he devised a way to test such machines: an evaluation to determine whether a machine could pass for a human being in a written conversation. This became known as the Turing test, and it was a watershed moment in the history of artificial intelligence.

It is generally thought that the history of artificial intelligence began after World War II, with scientists such as Alan Turing and researchers Marvin Minsky, John McCarthy, Allen Newell, and Herbert A. Simon.

In 1956, these researchers established artificial intelligence as a field of study in its own right. The new science was soon recognized when they formalized the term at the Dartmouth conference, with the mission of developing intelligent machines.

After years of research and development, AI took another significant step forward in 1964 with the creation of the world’s first chatbot, ELIZA. Its function was to carry out conversations automatically, guided by keyword-based rules that imitated a psychoanalyst.
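To make that keyword idea concrete, here is a minimal sketch in Python of how such a keyword-driven responder can work. It is not the original ELIZA program; the keywords and replies below are invented purely for illustration.

# A few keyword -> canned-reply rules in the psychoanalyst style.
RULES = {
    "mother": "Tell me more about your family.",
    "sad": "Why do you think you feel that way?",
    "always": "Can you give me a specific example?",
}

def respond(user_input):
    # Scan the input for known keywords and return the first matching reply.
    text = user_input.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "Please, go on."  # fallback when no keyword matches

print(respond("I had a dream about my mother."))  # -> Tell me more about your family.

The real program was more elaborate (it reassembled parts of the user's own sentence into its replies), but the core mechanism was this kind of keyword matching rather than any genuine understanding.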

From there, AI went on to build machines through innovative solutions that pursued something like the neurological capacity of a human being, and it has improved surprisingly quickly with the use of algorithms and data-driven approaches.

AI today

After such a long period, we now see significant advances in AI. We live in an era where it is already ingrained in our environment, often without our noticing: we use it and learn from it on a daily basis.

Artificial intelligence is a great ally in our routine tasks, in corporations, in process optimization, and in security systems, and it is increasingly recognized in healthcare and, most notably, in customer service.

Personal assistants such as Apple's Siri and Microsoft's Cortana, among others, are examples of this presence in our daily lives, communicating with us directly through smartphones and computers.

Artificial intelligence has unquestionably advanced, with algorithms that learn and reason in ways different from humans. Through Machine Learning, AI can now develop applications largely on its own.

Given access to data, these algorithms let machines improve without direct human intervention. This is closely related to Deep Learning, which starts from large amounts of data and learns to approximate human thought and language.

The future of AI

And what about the future? I hope that AI will gain more space in the market, allowing it to develop new theories and promising large-scale applications.

This evolution will likely happen gradually, aided by Machine Learning, neural networks, Deep Learning, cognitive computing, and natural language processing.

We can expect the emergence of new market opportunities, new solutions, and an ever stronger relationship between humans and these technologies, supported by training.



Created by

Jair Ribeiro

A highly engaged and innovative AI Strategist. Passionate about communication, with a broad I.T. Management and AI background.

