Algorithms are already at work in our smartphones, drones, smart home devices, security and surveillance systems, music and media streaming services, smart cars, banking and finance, and many other sectors. Who is driving a self-driving car? It's AI. Some of what you see in AI movies is already real, even if robots are not running around with guns the way they do on screen. We even have a robot citizen: Saudi Arabia's decision to grant citizenship to the humanoid robot Sophia, who has said she wants to start a family, was seen as a big step toward the country's modernization. Now, before Sophia starts her family, let's look at how it all began. Let's see what artificial intelligence is and trace its history.

What is artificial intelligence?

Artificial intelligence is the area of computer science that focuses on building software and machines capable of thinking intelligently, just as you and I do. The goal is to develop computer systems with abilities such as speech recognition, visual perception, learning, planning and, most importantly, problem solving and decision-making. AI-driven systems tend to imitate human intelligence by exhibiting traits like:

  • Learning
  • Planning
  • Perception
  • Knowledge
  • Reasoning
  • Problem solving
  • Ability to move objects

To exhibit this behavior, AI-driven systems rely heavily on algorithms and on knowledge acquired through continuous learning. Let's have a look at the evolution of artificial intelligence.
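To make the "learning" trait above concrete, here is a minimal, purely illustrative sketch: a single perceptron that learns the logical AND function from labeled examples. This is a toy model, not how production AI systems are built; the function names and training data are invented for this example.

```python
# Illustrative sketch only: a perceptron "learning" a rule from examples.
# Real AI systems use far larger models, but the principle is the same:
# adjust internal parameters whenever a prediction is wrong.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Supervised learning: nudge weights until predictions match labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = label - pred          # learn from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# Labeled examples of the AND function: output is 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the program has "learned" the AND rule from data rather than having it hard-coded, which is the essence of the learning trait listed above.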

History of artificial intelligence

While the Axis and Allied powers of World War II were locked in a war that took the lives of 70-85 million people, the German military used the Enigma machine to produce coded messages that were unbreakable, or at least the Germans thought so.

However, the Allies had the mathematical talent to crack the Enigma code. A team of mathematicians and computer scientists, in which Alan Turing played the key role, created an electro-mechanical device called the bombe. This intelligent machine not only shortened the war and saved lives; it also laid the foundation for machine learning.

Dartmouth Conference, 1956

The term 'Artificial Intelligence' was coined in 1956 at the Dartmouth Conference, organized by American computer scientist John McCarthy. Many new research centers were established across the United States, and computer scientists began exploring the new possibilities of artificial intelligence technology.

Herbert Simon and Allen Newell played an instrumental role in recognizing the transformative potential of artificial intelligence, and both promoted it as a promising field of computer science.

The beginning of serious research

The first outcome of this serious research came on the Ferranti Mark 1 (1951), a machine programmed to play checkers and chess; it was able to defeat an amateur at both games. J. C. Shaw, together with Herbert Simon and Allen Newell, created the General Problem Solver, a computer program that could solve the Towers of Hanoi and other simple puzzles, though it could not solve any real-world problem.

John McCarthy, the father of AI, developed the LISP programming language in 1958. This second-oldest high-level programming language soon became the favorite for machine learning and AI research.

In the 1960s, researchers worked on developing algorithms capable of proving geometric theorems and solving mathematical problems.

By the end of the 1960s, computer scientists were working on machine vision, and with it the development of machine learning in robots began. Japanese researchers were the first to build an intelligent humanoid robot, WABOT-1, in 1972.

AI Winter

AI Winter refers to a period of little prosperity or progress for artificial intelligence technology. It began in the early 1970s, when funds for further research and development in the field became scarce. Computer scientists were falling short of their promises and failing to create truly intelligent machines, and the processing power needed to handle enormous amounts of data simply was not available. The acute shortage of funds finally eased after the mid-1990s.

A new beginning at the end of the first millennium

Many American corporations started showing interest in artificial intelligence development. In Japan, meanwhile, the government planned to invest in fifth-generation computers with machine learning capabilities. Soon AI researchers began to believe that in the near future they would be able to develop computer systems exhibiting traits of human behavior.

Garry Kasparov versus AI

This man-versus-machine encounter caught the attention of chess enthusiasts and AI enthusiasts alike. We are all told that world champion Garry Kasparov lost to IBM's Deep Blue in 1997. However, we cannot ignore some lesser-known facts. It was a rematch: Kasparov had won the first match against Deep Blue in 1996.

In 1985, he had defeated 32 different chess computers in a simultaneous exhibition.

Four years later he defeated another chess computer, Deep Thought.

In 1992, of the 37 blitz games he played against the German chess program Fritz 2, he won 24, lost 9 and drew 4.

The match he lost to IBM's supercomputer Deep Blue is the most talked about, but we should hear his account as well. He was not given access to Deep Blue's recent games, and he suspected that some human intervention was involved, which IBM denied. He went on to win and lose several more games against different chess computers.

Dotcom Bubble Burst and AI 

AI was not sidelined, but it did become funding-deprived in the early 2000s. Even so, machine learning and artificial intelligence development did not slow down, and machine learning methods successfully entered many government and corporate domains.

Hardware Improvements and AI

Improvements in hardware provided more storage capacity and processing power. Huge quantities of data became available, along with the computing power needed to process and learn from them.

The last decade and a half

Google, Amazon, Baidu, IBM, Cloudera, Confluent and many more companies are taking commercial advantage of AI.

Artificial intelligence technology is everywhere, and a huge segment of the population interacts with it daily. Still, there are people who don't know what artificial intelligence technology is.
