Artificial intelligence (AI) is perhaps the most hotly discussed technology theme in the world. Since the launch of ChatGPT in November 2022, hardly a day has gone by without AI capturing the headlines. And this is only the beginning: GlobalData forecasts that the global AI market will grow at a compound annual growth rate (CAGR) of 39% between 2023 and 2030.

When will AI surpass us?

According to GlobalData, we are still in the very early stages of AI. Even now, however, AI can do a lot: it can hold high-quality conversations, and some people have even reportedly married AI bots. Such capabilities at this early stage hint at how advanced the technology could become. At some point, AI could even become more intelligent than the most gifted human minds. Researchers call this stage of development “artificial superintelligence” (ASI).

Many influential businesspeople and experts have already weighed in on when ASI might arrive. In April 2024, Elon Musk argued that AI smarter than humans could be here as soon as the end of 2025, a sharp revision of his earlier forecast that ASI would exist by 2029.

However, according to GlobalData, this is unlikely. GlobalData notes that researchers theorise that artificial general intelligence (AGI) must be achieved before ASI can be reached. At that stage, machines would have consciousness and be able to do anything people can do.

Although companies such as OpenAI and Meta have publicly committed to achieving AGI and are working towards this goal, it looks like it will take years before we see human-like AI machines around us that can do and think exactly as humans do. As a result, GlobalData expects AGI to be achieved no earlier than 35 years from now. Considered the ‘holy grail’ of AI, AGI remains entirely theoretical for now, despite the hype.

And considering that ASI is the step after AGI, it is also likely decades away.


This level of advancement brings to mind science fiction films and literature in which AI takes over the world. Notably, Elon Musk has commented on this possibility before, arguing that there is a small but non-zero chance that AI will wipe out humanity.

In September 2023, headlines announced that tech executives including Bill Gates, Elon Musk, and Mark Zuckerberg had met with lawmakers behind closed doors to discuss the dangers of uncontrolled AI and superintelligence. Evidently, not everyone is excited about ASI.

Even today’s ‘good enough’ AI, with its limited capabilities, concerns tech leaders and world governments. AI-amplified problems such as misinformation have already caused considerable trouble. Recognizing the current and future threats of AI, governments, key influencers, and organizations have taken action. In March 2024, for instance, the UN General Assembly adopted the first-ever UN resolution on AI, aimed at ensuring the technology is used safely and reliably.

In the end, it may still be some time before ASI exists. Nevertheless, while ASI has the potential to revolutionize how humans and machines interact, steps must be taken today to minimize its potential threats. Perhaps, to keep ASI safe, we should turn to fiction: the world may need a set of rules like Isaac Asimov’s Three Laws of Robotics, which the robots in many of his stories followed and which prevented the machines from harming humans.