At present, we can travel no further than the far side of the moon, a feat accomplished, and rightfully lauded, by the crew of Apollo 13 in 1970. Half a century on, we remain unable to venture beyond our planet’s satellite, relying on telescopes and probes for glimpses of the rest of the universe. Like the inhabitants of Plato’s allegorical cave, we see only shadows of the stars, solar systems, and galaxies beyond our grasp. Yet one of the things we fear most, artificial intelligence, may be what finally unshackles us.
The rhetoric around AI, among both the general public and experts, is a mixed bag of awe and apprehension. In March 2023, Elon Musk was among a group of signatories calling for a pause on AI development, citing the “profound risks” the technology may pose to society. This was followed in May 2023 by a statement from the Center for AI Safety that ranked the “risk of extinction from AI” alongside nuclear war and pandemics. Signatories to this statement included the CEO of OpenAI, the CEO of Google DeepMind, and Bill Gates. In short, it is easy to find information that feeds worries about the impact of AI.
A new form of intelligence
Much of this anxiety stems from the hypothetical scenario of Superintelligence, in which AI becomes far more intelligent than humans and can improve itself at an exponential rate. Daunting as that eventuality may seem, the possibilities it brings may outweigh our fears.
Autonomous underwater vehicles (AUVs) help scientists uncover the secrets of the oceans by collecting data independently. These robots were created by human intelligence; we can only begin to fathom the capabilities of robots designed by super-human intelligence. Similarly, the much-loved Mars rovers help us map the terrain of an alien planet. Vehicles created by ‘minds’ more intelligent than ours could venture far beyond our red neighbour.
AI is already used across a host of other spheres, from medicine to climate change. Again, the systems that help us diagnose diseases or forecast weather patterns were designed and built by human minds, which, while brilliant, still have their limitations. Superintelligence would lift those limits and could create far more complex systems.
Emergence
The tide has already begun to turn among experts. Despite warning of the risks mentioned earlier, Bill Gates is still investing in AI, contributing to a $1.3bn funding round for AI start-up Inflection in June 2023. Similarly, although he called for a pause on AI development, Elon Musk conceded that a pause was not feasible and instead founded his own AI start-up, xAI, which he has framed as a safer, more truth-seeking approach.
The concept of Superintelligence may be intimidating, but with AI we can see more, reach further, and understand better. Our view of the world, and perhaps of the entire universe, may be expanded beyond our imagination… and from the cave, we may emerge.
