Artificial intelligence (AI), we are told, will transform our lives. The technology is still in its infancy; the all-knowing AI of sci-fi films remains many years away. However, within the next five years, AI will become increasingly necessary to the survival of businesses. As government policy struggles to keep pace with technological development, AI developers have been left to determine their own ethical and regulatory guidelines.
The artificial intelligence arms race
The potential of artificial intelligence has triggered an arms race among global tech giants vying for supremacy. These companies have invested billions of dollars in R&D and acquisitions, developing ever more sophisticated technology. We cannot expect universities or smaller companies to match this level of innovation while competing against the financial and computational backing of tech behemoths. Any start-up that shows promise tends to be acquired and integrated into these companies’ AI ecosystems. Even well-intentioned AI initiatives have succumbed to the giants’ economic and data-processing power. OpenAI, set up in 2015 as a non-profit research organisation to build socially beneficial AI software, restructured itself in 2019 as a for-profit organisation and agreed an exclusive computing partnership with Microsoft worth $1bn in investment.
Potential conflicts of interest
As AI implementation becomes more widespread, elected governments may have to rely on universities for policy guidance, but the potential for conflicts of interest is growing. In July 2019, the New Statesman magazine published evidence of millions spent by tech companies in attempts to shape academic debate on the future of AI ethics and policy. Google and its AI subsidiary, DeepMind, have donated funding and grants to the Oxford Internet Institute (OII), which explores the ethics of AI and the civic responsibilities of tech firms. In Germany, Facebook donated $7.5m to establish the Institute of Ethics in Artificial Intelligence at the Technical University of Munich. Tech giants are not afraid to use their vast financial resources to shape the future of AI.
The political and economic power these tech giants hold over governments raises questions about how much scrutiny companies will face from AI regulation, a dynamic underlined by Denmark’s appointment of a tech ambassador to Silicon Valley. As technological development continues to outrun governance, we are forced to trust big technology companies to determine their own ethical code. What is worrying is these companies’ track record of violating public trust: the Cambridge Analytica scandal, the non-consensual collection of biometric data, tax avoidance and environmental damage are just a few examples. Questions must be asked about who to trust with the ethical future of AI.
AI regulation is not keeping pace with industry growth
We risk entering a new Gilded Age, as AI dominance further entrenches the power of tech companies, making them, and those who run them, unimaginably wealthy. We may only be starting to see AI’s potential, but we face an imbalanced and uncertain future if we do not ask how to regulate the use of technologies that are set to radically transform our lives.
Download the full report from GlobalData's Report Store.
GlobalData is this website’s parent business intelligence company.