With its Q* algorithm, OpenAI may have achieved ‘agent AI’, or first-phase AGI. What harm might Q* and its like yield? What good?

Ultimately, the core question for society is not what AI can do but what it should do. That is for society, not the tech giants, to decide, yet we have not even begun to address it.

OpenAI’s implosion: a timeline

On May 16, 2023, AI’s new poster child, OpenAI’s Sam Altman, went before the Senate Judiciary Subcommittee to plead with lawmakers to create regulations that would embrace the powerful promise of AI while mitigating the risk that it will overpower humanity.

In June, Senators Hawley and Blumenthal produced a framework for a bill to regulate the US AI industry and do the probably impossible: square Altman’s circle.

On October 30, President Biden signed an Executive Order requiring AI developers to share the results of their safety tests with Federal agencies before releasing new products onto the market.

A few days later, tensions within OpenAI, between the guardians of its original mission to develop AGI for the overall benefit of humanity and a new entrepreneurial culture of bringing products to market as fast and as widely as possible, helped to blow the company apart, temporarily at least. Q* may have been the detonator.

The signal $11bn partnership between Microsoft and OpenAI is at the industry’s heart. Microsoft owns 49% of OpenAI, moving eventually to a majority, and is embedding OpenAI’s advancing AI across its product range and boosting its Azure cloud revenues in the process. Such strategic investment gives Microsoft significant control over the future direction of OpenAI.

AI: profit versus safety

Today, after the traumatic OpenAI episode, it is status quo ante: the same AI industry competitive line-up, albeit with more power for Microsoft, and the same concentration of controlling power in the hands of a very few self-regulating firms, the ‘AI 1%’, which outside China means OpenAI, Google, Meta, Amazon, NVIDIA, and Tesla.

For the 1%, the bottom line is the highest priority. Huge potential profits lie downstream as their products are deployed ever more widely to a swelling customer base, charged up by ChatGPT’s dramatic arrival a year ago, and each firm faces rising pressure to deploy the latest AI products faster and more effectively than its competitors.

In high policy-making circles, a fierce war is now joined between the 1% on one side and ‘safety first’ political cohorts and government regulators on the other.

The former fear that anything but the lightest-touch regulation would hamper their commercial prospects. The latter are under intense pressure from high-profile figures within the AI intelligentsia, who publicly and dramatically fear an AI apocalypse, and from public policymakers who fear AI-induced mass unemployment and social and psychological harm.

Hence, the 1% is intensifying its lobbying against the heavy-touch aspects of the Hawley-Blumenthal Bill, such as ‘pre-deployment licensing’, under which proposed products would face scientific checks on the data used to train them, on bias, and on their propensity for error and insecurity.

China races ahead

The 1%’s counterarguments will centre, as always, on the need to avoid any regulatory measures that impede the pace of US high-tech innovation, with the gathering threat from China in mind. This will likely get the ear of a majority in Congress, where one of the few areas of agreement across the aisle is that China is a ‘hegemonic threat’.

China leads in AI-enabled hypersonic weaponry, autonomous systems, quantum communications, and smart cities. The fact is, we cannot put the AI genie back in the bottle. China’s social credit society is already widely regarded as an early harbinger of the AI future.