Businesses should spend a third of their total R&D budgets on AI safety and governance, warns an open letter issued today (24 October).
The letter, signed by 24 AI academics, warns that AI could pose an existential threat to businesses and society comparable to climate change.
Alongside this allocation of R&D expenditure, the letter calls for a total reorientation of businesses’ R&D procedures to ensure ethical AI standards: all AI, it states, should be developed with honesty, robustness and interpretability.
The letter also describes the concerns these academics harbour over “unchecked” autonomous AI systems assuming pivotal roles in societal and business decisions.
Though this may sound like science fiction, the letter points out that many businesses are already leveraging AI in such decisions.
“As autonomous AI systems increasingly become faster and more cost-effective than human workers, a dilemma emerges,” the letter reads. “Companies, governments, and militaries might be forced to deploy AI systems widely…”
An investigation by The Guardian found that the UK government has already implemented AI systems in decisions over benefits and marriage licence applications.
A 2023 GlobalData survey also found that 17% of businesses have already adopted AI at wide scale throughout their company.
The open letter warns of the long-term problems that wide-scale implementation of AI systems may cause.
“Without sufficient caution, we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective,” the letter states. “Large-scale cybercrime, social manipulation, and other highlighted harms could then escalate rapidly.”
Generative AI has already led to concerns over social engineering phishing emails, with many AI detectors failing to recognise the emails as synthetic content in the first place.
However, tension between tech companies and regulators could slow down the regulation of AI.
According to GlobalData’s 2023 Thematic Intelligence Tech Regulation report, there is often a struggle between innovation and regulation. Without agreed, consolidated international principles, the global AI market risks fragmentation.
“Many AI entrepreneurs and developers have argued that excessive red tape could impose unnecessary and burdensome hurdles that stifle innovation,” states the report. “However, the lack of regulation can be equally damaging to innovation, as investing in an unregulated space can be considered too risky.”
Concluding their letter, the academics write that AI could be the technology that shapes this century and call for national and international governance of AI proportionate to the risk posed by each system.
While optimistic that there is a “responsible path” for businesses wishing to use AI, the letter warns that AI safety is lagging behind the technology’s rapid development.
“To steer AI toward positive outcomes and away from catastrophe, we need to reorient,” it states.