Each day brings a new story of how artificial intelligence (AI) has achieved yet another apparent intellectual feat, often better than a human could have done it.

But how has this been achieved, how long has it taken, and how close are we to giving machines human-like intelligence?

What is artificial intelligence?

John McCarthy, the computer scientist who coined the term, initially defined AI as “the science and engineering of making intelligent machines”.

Expanding on this, McCarthy and the researchers who founded the field of AI alongside him took it to cover any task performed by a machine that would require intelligence if a human were to attempt it.

Nowadays, AI is often divided into two categories: narrow AI and general AI.

Narrow AI refers to machines programmed to perform one particular task, such as facial recognition or driving a car. These systems typically rely on machine learning, in which vast amounts of data are used to train an algorithm. As more data is fed in, the machine ‘learns’ how best to perform its set task.
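
To make this concrete, below is a minimal sketch of that train-then-predict loop in Python, using the scikit-learn library and its bundled handwritten-digits dataset. Both the library and the task are assumptions chosen purely for illustration; the article itself names no particular tools.

    # Illustrative sketch only: the library and task are assumed, not taken from the article.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # 1,797 labelled 8x8 images of handwritten digits

    # Hold back a quarter of the data to test on examples the model has never seen
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0
    )

    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)  # the 'learning' step: fit parameters to the training data

    print(f"Accuracy on unseen digits: {model.score(X_test, y_test):.2%}")

The narrow-AI pattern described above appears here in miniature: feed the algorithm more labelled examples and it generally gets better at its one set task, but it can do nothing else.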

General AI refers to machines that are capable of thinking, understanding and learning in a similar way to humans. Machines deemed to have “full” AI must be able to process a situation and make a decision without prior training from a human.

A brief history of AI: Where did it all start?

Scientists first began to consider the possibility of creating an artificial brain sometime in the 1940s.

Alan Turing, the British mathematician and computer scientist, is the most noteworthy name from this period. During the Second World War, Turing developed a machine to crack the German forces’ Enigma code, which was used to secure military communications. This work would lay the foundations for today’s machine learning capabilities.

While AI machines are now capable of beating grandmasters at games like checkers and chess, this began with the Ferranti Mark 1 computer, which was successfully programmed to beat amateur players in 1951.

The beginning of AI as we know it

However, the term artificial intelligence wasn’t coined until 1956, when McCarthy held the Dartmouth Summer Research Project on Artificial Intelligence, where a number of scientists came together to devise “a way of programming a calculator to form concepts and to form generalisations”.

The conference prompted a golden era of AI research, in which various systems were built that were capable of performing tasks such as solving algebra equations and mimicking human conversation.

In 1973, the world’s first humanoid robot with apparent human-like intelligence was developed in Japan. The Wabot-1 was fitted with a limb control system, a vision system and a communications system. With sensors placed over its body, the robot was capable of walking around and picking up objects.

However, as scientists attempted increasingly complex feats, the computer technology of the time presented obstacles they could not overcome. Funding from governments and organisations dried up, leading to periods of stagnation in the field that became known as ‘AI winters’.

Computer advancements lead to AI revival

Interest didn’t pick up again until the 1990s, once the computational issues had been overcome.

And then, in 1997, IBM’s Deep Blue became the first machine in history to defeat a reigning world chess champion, Garry Kasparov.

As computers have improved, so too has AI. Greater processing power means more data can be processed, which in turn means better machine learning.

In recent years, AI has managed to defeat a Go world champion – at a game that is considerably harder for a machine to master than chess. While chess offers 20 possible opening moves, Go offers 361.
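
A rough, commonly cited way to quantify that gap is game-tree size: a game with an average branching factor b and a typical length of d moves has on the order of b^d possible move sequences. Taking the usual estimates (b ≈ 35 and d ≈ 80 for chess; b ≈ 250 and d ≈ 150 for Go):

    Chess: 35^80   ≈ 10^123 possible move sequences
    Go:    250^150 ≈ 10^360 possible move sequences

A search space of that size rules out the brute-force look-ahead that served Deep Blue, which is why Go-playing systems lean so heavily on machine learning.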

AI has also taught itself how to walk, written a love song and learned to spot illnesses and ailments better than human doctors can.

The future of AI: Will we achieve artificial general intelligence?

While narrow AI is coming on in leaps and bounds, we are still some way off achieving general AI.

“Computers don’t make predictions and provide insight in the way we like to think they do. For the foreseeable future at least, you will need a human to interpret and put together what the machine spits out – which is why AI and ML can currently only be considered tools,” Peter Finnie, a partner at intellectual property law firm Gill Jennings and Every, previously told Verdict.

We do not yet understand how the human brain works well enough to recreate it. However, artificial general intelligence, in which a machine’s ‘brain’ functions like a human one, is theoretically possible.

A recent survey of AI experts found that many expect general AI to be achieved within the next 40 years, but given the current rate of AI development, some believe the feat could come as early as 2030.


Read more: History of IoT: From idea to an industry approaching $1tn