OpenAI, the creator of ChatGPT, has announced it will address the problem of AI “hallucinations” with a new training method – but experts believe completely removing these malfunctions could be a long process.

AI hallucinations occur when generative chatbots present misinformation as if it were fact.

Earlier this year, Google’s rival to ChatGPT, Bard, notoriously made inaccurate claims during its press demo. The newly revealed chatbot suffered backlash when it wrongly claimed the James Webb Space Telescope had taken the first photograph of a planet outside our solar system.

In a recent report, OpenAI researchers said: “Even state-of-the-art models are prone to producing falsehoods – they exhibit a tendency to invent facts in moments of uncertainty.

“These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logical error is enough to derail a much larger solution.” 

OpenAI said it is attempting a new strategy to fight hallucinations, named “process supervision”. Simply put, instead of rewarding a model only for a correct final answer, it trains models with feedback that rewards each individual step of reasoning that is correct.
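The distinction between the two approaches can be sketched in a toy example. This is purely illustrative – the function names, the arithmetic checker and the scoring scheme below are invented for the sketch and are not OpenAI’s actual implementation:

```python
# Toy contrast between outcome supervision (score only the final answer)
# and process supervision (score every reasoning step). Illustrative only.

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: one reward based solely on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: list[str], verify_step) -> float:
    """Process supervision: reward the fraction of reasoning steps that
    a verifier (here a simple callback) judges to be correct."""
    return sum(1.0 for step in steps if verify_step(step)) / len(steps)

def verify_arithmetic(step: str) -> bool:
    """Naive checker for steps of the form 'expression = value'."""
    left, right = step.split("=")
    return eval(left) == eval(right)

# Hypothetical chain of thought for "What is 3 * 4 + 2?"
steps = ["3 * 4 = 12", "12 + 2 = 14"]

print(outcome_reward("14", "14"))            # 1.0 – only the end result counts
print(process_reward(steps, verify_arithmetic))  # 1.0 – every step checked
```

Under process supervision, a chain that reaches the right answer through a wrong intermediate step would be penalised, whereas outcome supervision would score it perfectly – which is the behaviour the researchers say lets a single logical error derail a larger solution.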


The researchers claim this approach will lead to more efficient and reliable AI, as it encourages the models to think more like a human with a more grounded “chain of thought”.

However, some experts believe that completely getting rid of hallucinations in generative AI will take a long time.

“It’s likely that the journey to completely removing hallucinations will be a long one,” Greg Bortkiewicz, digital marketing specialist, told Verdict.

“Even if OpenAI were to announce that it had removed hallucinations, the news would likely be met with quite some scepticism,” he added.

Bortkiewicz said ChatGPT should not be “solely relied on to produce accurate content” and needs to continue to be proofread and verified.

OpenAI has been at the forefront of the generative AI boom with its ChatGPT application. The mega chatbot amassed over 100m monthly users within two months of its release.

Microsoft has invested over $13bn in OpenAI, which is currently sitting at a valuation of $29bn.

GlobalData is the parent company of Verdict and its sister publications.