GlobalData’s recent Strategic Intelligence: Tech in 2035 report calls artificial superintelligence (ASI) the century’s most consequential technological breakthrough – but what is ASI?
At its simplest, ASI is a hypothetical form of artificial intelligence whose intellectual capabilities exceed those of humans. It would process and analyse vast quantities of data at extraordinary speed, and crucially, possess the ability to improve itself as it interacts with the world. Such superhuman capabilities could help solve some of humanity’s most enduring problems.
By contrast, the AI systems we have right now are actually quite stupid. The transformer architecture on which they run has not fundamentally changed since its introduction in 2017; what has changed is the amount of data being fed into it. Yet, despite ingesting millions of internet articles, web pages, and other sources, these systems are still no cleverer than humans. They cannot, for instance, apply their knowledge generally to new situations.
This is quite the conundrum. Data, like oil, is not an infinite resource, and AI researchers worry that eventually there will be no fuel left with which to keep scaling these systems. A self-learning model like ASI could sidestep this problem of diminishing returns. It would be an understatement, then, to say that a lot is riding on its development. Hence, billions of dollars are being poured into making it a reality, and ludicrous pay packages and company valuations are flying about.
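To see why researchers talk about diminishing returns, consider a toy power-law scaling curve. This is a minimal sketch, not GlobalData's analysis: the functional form is the kind commonly invoked in scaling-law discussions, and the constants are invented purely for illustration.

```python
# Illustrative only: a hypothetical power-law scaling curve of the kind
# often invoked in discussions of data scaling. The constants a and b are
# invented for this sketch and correspond to no real model.

def loss(tokens: float, a: float = 100.0, b: float = 0.1) -> float:
    """Hypothetical model loss as a function of training tokens."""
    return a * tokens ** -b

# Each additional 10x of data buys a smaller absolute improvement.
for tokens in (1e9, 1e10, 1e11, 1e12):
    gain = loss(tokens) - loss(tokens * 10)
    print(f"{tokens:.0e} tokens: next 10x of data improves loss by {gain:.2f}")
```

Whatever the real constants, the shape is the point: each order of magnitude of data yields less improvement than the last, which is why a system that improves itself, rather than merely ingesting more data, is so attractive.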
The pathway to ASI
The consensus is that ASI is two conceptual leaps away. The next jump will be to artificial general intelligence (AGI): namely, an AI that matches human intellectual capabilities across a broad range of domains.
A crucial feature often ascribed to AGI is the capacity for self-improvement—the ability to refine its own algorithms and learning processes without constant human intervention. Many theorists believe that a self-improving AGI could trigger a rapid cascade of improvements: successive iterations building faster and better versions of themselves, leading to exponential gains in intelligence. This runaway process is the basic pathway by which AGI is hypothesised to become ASI. In the starkest versions of that scenario, superintelligence could appear very quickly, even overnight.
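The arithmetic behind that cascade is easy to sketch. Below is a deliberately crude toy model, assuming (purely for illustration) that each generation improves its successor in proportion to its own capability; none of the numbers are meaningful.

```python
# Toy model of recursive self-improvement, for illustration only.
# Assumption: each generation builds a successor whose relative gain
# grows with the current capability, so growth compounds on itself.

capability = 1.0     # arbitrary units; 1.0 = human-level baseline
rate = 0.1           # hypothetical coupling between capability and gain

for generation in range(1, 11):
    capability *= 1 + rate * capability
    print(f"generation {generation}: capability = {capability:.2f}")
```

Because the growth rate itself grows, the curve is steeper than exponential. That is the mathematical intuition behind 'overnight' superintelligence scenarios, though nothing guarantees real systems would behave this way.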
Although this pathway seems incredibly high-tech, it rests on a very ancient concept: the notion of emergence.
Ancient origins
In the fourth century BC, ancient Greek philosopher Eubulides of Miletus began to question the precision of our language and the distinctions we draw along continuums. Most famously, he asked: given a collection of grains, at what point does it become a heap?
Over the centuries, responses to Eubulides' (still unsolved) puzzle, known as the sorites paradox, have contributed to the development of fields as disparate as set theory, linguistics, and cognitive science. One of the most influential of these is the relatively obscure science of mereology, the study of part-whole relationships.
While you are unlikely to have heard of it, mereology underpins some of the biggest debates in AI. If you think AI can become sentient, then you have a mereological opinion; you believe individual pieces of code can create something more than themselves.
Mereology also helps us understand ASI by posing the right question: can sheer quantitative accumulation (more cycles, more compute, more data) produce qualitative change (a new level of intelligence), or can the parts never amount to more than their sum? In other words, will the continual self-improvement of AGI merely yield incremental gains, or will it cross a threshold and give rise to an intelligence of an entirely different order?
ASI researchers, whether they know it or not, are quite optimistic about the mereological potential inherent in AGI. Their faith is in a system that can produce emergence.
The future is emerging
Anything that emerges from a system is called an emergent phenomenon. We can define emergent phenomena as the novel patterns and behaviours that arise from the collective interactions of simpler components in a system, or, more simply, as properties that cannot be straightforwardly predicted from the parts alone. Emergent phenomena can be hard to pin down, as Eubulides learned. Human consciousness, for instance, has evaded explanation for centuries.
ASI researchers should also be aware that not all systems can produce emergent phenomena. To do so, a system must have certain properties (the toy simulation after this list shows them all at work):
- Multiple components: A single entity alone has no emergent properties—a single water molecule is not ‘wet,’ rather, wetness is a property that emerges from the collective interaction of billions of molecules.
- Interaction between components: The components must genuinely interact with one another, rather than merely sit side by side as an aggregate.
- Non-linearity: Small changes in one part can disproportionately change the overall system in unpredictable ways.
- Separation of scales: An emergent phenomenon operates on a higher plane than the interactions of its individual components.
- Self-organisation: Many emergent systems can produce ordered, coherent patterns without any external or centralised control, such as ant colonies.
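As promised above, here is a minimal sketch of Conway's Game of Life, a textbook example of emergence that ticks every box on the list: many identical cells, purely local interaction, non-linear sensitivity to single-cell changes, and self-organised structures that exist at a scale above any individual cell. The code is a standard implementation, nothing specific to AI.

```python
from collections import Counter

# Conway's Game of Life: each cell obeys one trivial local rule, yet
# coherent, moving structures emerge at the scale of the whole grid.

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Advance the set of live cells by one generation."""
    # Count how many live neighbours every candidate cell has.
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live
    # neighbours, or 2 live neighbours and is already alive.
    return {
        cell for cell, n in neighbours.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A 'glider': five cells whose collective pattern travels diagonally,
# a behaviour no single cell possesses or encodes.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

for _ in range(4):
    cells = step(cells)

print(sorted(cells))  # the same glider shape, shifted one cell diagonally
```

Delete a single cell from the glider and it no longer glides: a small change with a disproportionate effect (non-linearity). The glider itself, a shape that travels across the grid, is separation of scales in action.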
Following this framework, we can understand ASI as a set of patterns and behaviours that result from the collective interactions of AGI algorithms as they self-organise into a new whole, operating at a scale separate from AGI itself.
Of course, it remains questionable whether AGI is such a system. Treating ASI as an emergent phenomenon has both explanatory and practical consequences. Emergence can be creative and powerful, producing systems that solve problems in ways their designers did not explicitly encode, but it can also be unpredictable and uncontrollable. If ASI emerges from complex interactions beyond straightforward design, its behaviour might be opaque even to its developers.
