Google’s DeepMind artificial intelligence program recently taught itself how to walk, Facebook’s AI developed its own language, and fully autonomous cars are just around the corner.
AI is developing at a rapid pace, and even its creators have concerns.
- October 23, 2017
AI features heavily in almost every movie set in the future. These AI robots tend to turn against their creators in a bid to rule the world. The Terminator, I, Robot and Ex Machina are all perfect examples of bots gone bad.
While currently just the stuff of Hollywood, an AI takeover might not be as far off as it seems.
Bots have already learned how to lie: Facebook’s negotiation bots taught themselves to feign interest in items they didn’t want, purely to gain leverage in a bargain.
Likewise, Microsoft’s Tay AI took just 16 hours to become a racist, sexist Twitter troll, showing just how easily AI can be corrupted.
AI experts call for a ban on “killer robots”
AI certainly has its uses. However, the developer community fears that these technologies could one day be used to power advanced autonomous weaponry.
Referred to as killer robots, these creations would have the power to fight wars without human control.
According to an open letter posted by the Future of Life Institute, this poses a huge threat to humanity.
The letter warns: “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
Backed by 116 leaders in the AI industry, the letter calls on the United Nations to ban the development and use of AI weaponry, insisting that the technology is a “Pandora’s box” that will be hard to close once opened.
Many fear that these weapons, indifferent to loss of life or damage to infrastructure, would lower the threshold for armed conflict, resulting in more wars and greater casualties.
Experts believe that AI weapons could be deployed within the next decade if permitted.
Who are these experts fighting against AI weaponry?
The biggest name on the list is Elon Musk. Having co-founded PayPal before moving on to Tesla and SpaceX, Musk is one of the biggest names in tech.
The popular businessman previously suggested that we should be more concerned about AI technology than North Korea’s threats.
“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
— Elon Musk (@elonmusk) 12 August 2017
Musk has provided plenty of publicity by adding his name to the list, but who else has joined him?
As the co-founder of DeepMind, the AI company purchased by Google in 2014, Mustafa Suleyman is another big name.
Suleyman is also the co-founder of Reos Partners, a conflict resolution firm. AI warfare, though, seems to be one conflict he would rather stamp out before it begins.
AI pioneer Stuart Russell, who has been researching the technology since the 1980s, has written hundreds of papers over the years detailing his concerns over the speed at which AI is developing. Unsurprisingly, he strongly opposes the use of AI in weapon systems.
Yoshua Bengio is also on the list. As the head of the Montreal Institute for Learning Algorithms, editor of the Journal of Machine Learning Research and co-director of the Learning in Machines & Brains project, Bengio carries a lot of authority in AI circles.
Bengio’s concerns should be taken seriously, given his deep understanding of deep learning.
Gary Marcus, founder of Geometric Intelligence (purchased by Uber) and former head of AI at the ride-hailing company, has also signed his name.
Bionic limb expert Samantha Payne of Open Bionics is one of the few women among the signatories opposing autonomous weapons.
To view the full list of names, read the open letter on the Future of Life Institute’s website.