In 2000, Google co-founder Larry Page was asked, in an interview, about his 20-year roadmap for the company. He replied: “Google will fulfil its mission only when its search engine is AI-complete. You guys know what that means? That’s artificial intelligence.”

With hindsight, his answer was revelatory, but was anyone outside Silicon Valley taking note? There was no subtext, no veiled suggestion of Google’s intention to lead AI development, just an engineer’s clear-sighted laser focus on the endgame.

In the decades since Page declared his AI ambitions, technology went mobile, entered the cloud and was caught in the crosshairs of financial and political crises, before being swept up in the hype cycles around augmented and virtual reality, crypto and the metaverse.

Meanwhile, critical conversations about the early direction of AI had already taken place. Tesla founder Elon Musk has said publicly that Page accused him of being a ‘speciesist’ when he raised concerns that artificial general intelligence – an intelligence surpassing that of humans – might eliminate humanity once machines had no further use for it.

That conversation took place a good decade before OpenAI’s generative AI chatbot, ChatGPT, launched in November 2022 and became the fastest-growing consumer app in history. As AI captured the public’s attention, governments could no longer ignore its influence.

Investors were also taking notice. In 2023, AI-focused start-ups already make up 35% of the businesses selected by the legendary Silicon Valley accelerator Y Combinator. Global investment in the AI market fell sharply, from a peak of $127.2bn in 2021 to $72.9bn in 2022, according to research analyst GlobalData. But a flurry of generative AI acquisitions, alongside a venture funding frenzy around generative AI, has been gathering pace in 2023.

However, some, like veteran investor Warren Buffett, urge caution. Buffett has likened the development of AI to that of nuclear weapons: the overwhelming majority of use cases and applications are likely to be positive, but a minority of harmful ones could destroy humanity. Others cite shorter-term harms such as the spread of misinformation, job losses and threats to global peace and democracy.

These fears have been echoed by the creators of the technology themselves. In March 2023, an open letter signed by hundreds of AI experts, including Musk, called for a pause on the development of AI, warning of the dangers of moving too fast without regulatory oversight and agreed safeguards in place.

How should AI be regulated?

In May 2023, Sam Altman, CEO of ChatGPT maker OpenAI, went before a US congressional hearing, recommending the creation of a dedicated federal agency to oversee AI, including a requirement for licences for any company training large language models (LLMs).

While Silicon Valley leaders call for regulatory guardrails on AI development, investors remain giddy. The rush of capital into generative AI start-ups and into the stocks of AI chipmakers such as NVIDIA and AMD inverts an established dynamic: until now, Silicon Valley technologists would ‘move fast and break things’ and investors would follow.

Nevertheless, the opposing camps of AI optimists and Cassandras both appear to be calling for regulation, albeit with different approaches. Some industry leaders believe that blunt-force regulation could inhibit innovation. On 30 June, an open letter signed by more than 160 executives at companies ranging from Renault to Meta warned that the EU’s draft AI Act would harm Europe’s global tech competitiveness and leadership.

GlobalData’s 2023 AI Thematic Intelligence report outlines how regulation may hold back the growth of the global AI market. The analyst predicts that the global specialised AI applications market will be worth $146bn in 2030, up from $31.1bn in 2022, growing at a compound annual growth rate of 21.3%.
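For readers who want to sanity-check the arithmetic, the quoted figures are internally consistent; a minimal sketch in Python (purely illustrative, using only the numbers cited above):

    # Consistency check on the forecast quoted above:
    # $31.1bn in 2022, compounding at a 21.3% CAGR through 2030.
    base_2022 = 31.1          # market size in 2022, $bn
    cagr = 0.213              # compound annual growth rate
    years = 2030 - 2022       # forecast horizon in years

    projected_2030 = base_2022 * (1 + cagr) ** years
    print(f"Projected 2030 market size: ${projected_2030:.0f}bn")  # ~ $146bn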

The analyst breaks down its forecasts for this market into specific sub-categories, including conversational platforms, computer vision, and horizontal applications embedded with AI-driven features such as image recognition, natural language processing and generation, or sentiment analysis.

With such potential for profit, a question mark remains over whether the regulatory burden lies with governments or with industry. And that is the billion-dollar question, says Andrea Tesei, Centre for Economic Policy Research affiliate and associate professor of economics at Queen Mary University of London. While policymakers scramble to come up with regulatory frameworks, those who understand the fundamentals of the technology have voiced concerns that AI may ultimately only be self-regulated by those who build it, if at all.

“There’s a lot of back and forth between governments. And I guess that ultimately, the problem is deeper in the sense that regulation is by its own nature lagging behind technology. Regulation needs to be made by humans, and technology, to some extent, regulates itself. We don’t know yet fully what AI will be. And any regulation that can be made is retrospective to some extent, because it’s hard to see what the prospective problems will be. Governments will intervene when more regulation is needed. But the damage has been done to some extent,” he says.

The lack of regulatory oversight so far means that AI development is firmly in the control of a homogeneous handful of extremely influential humans in Silicon Valley. And some believe that solutions should focus on collaboration with these tech leaders rather than a punitive approach. This camp views business licences for AI models as an outdated solution applied to a novel problem.

LLMs are proliferating rapidly. Open-source LLMs will soon be widely available to download onto smartphones and will sit within layers of products and applications in ways the end user cannot discern. Creating a regulatory regime around LLM auditing may be entirely impractical at this point; many believe it is already too late for that.

For example, an anomaly of AI development known as ‘emergent properties’ describes a scenario in which AI systems suddenly and unpredictably acquire skills they were not programmed for. As AI development continues, the processes behind machine learning will become ever more opaque and difficult to predict. Questions arise over how these outcomes can be regulated and who bears responsibility for them.

According to GlobalData analyst Josep Bori, self-regulation tends not to work, largely because enforcement is lacking. “Ultimately, tech companies are profit-maximising organisations, and so will have strong incentives to either cheat or lobby hard to dilute regulations. And this is significantly easier in a self-regulation environment where large companies dominate the trade bodies which set up self-regulation,” he says.

Bori’s view is that the best approach is to regulate the technology in phases, rather than aiming for overly ambitious legislation. “It does more good to establish a few general principles that move the industry in the right direction, and then revisit frequently,” he says, citing the EU’s AI Act as an example of this more measured approach.

On 14 June, the European Parliament approved its position on Europe’s AI Act, putting the trading bloc on course to become the first region to adopt a comprehensive set of rules in anticipation of AI disruption.

Bori says the rules cover a few key areas, including bans on biometric surveillance and predictive policing systems, principles for disclosure and risk, and a requirement to register high-risk AI systems. “But for now it stops short of defining LLMs like ChatGPT or LLaMA as high-risk. This may change in future reviews, but it seems to start in the right direction,” adds Bori.

Would government regulation of AI be effective?

Regulatory efforts to rein in Big Tech have been underway for some time. But after decades of widespread adoption, with Big Tech becoming integral to the global economy, keeping the technology playing field free of monopolistic behaviour remains a challenge. It is those same companies that are now leading the charge in AI development.

The race to build the first general AI is well past the starting block. When it is achieved, current fears around Big Tech’s concentration of power may well seem insignificant in comparison. This raises the question of whether applying an old regulatory paradigm to such a new and fast-moving technology is futile. Collaboration, persuasion – call it what you like – is what many agree is needed between all stakeholders. And while humans remain social animals rather than cyborgs, many are simply relying on those leading the development of AI to stay firmly in the ‘speciesist’ camp.