ChatGPT is the hottest thing in artificial intelligence (AI) right now. Powered by OpenAI’s GPT-3 large language model (LLM), it’s a computer program that can understand users and converse with them in a way that feels remarkably close to talking with a human.

The sophisticated generative AI chatbot attracted one million users in just four days, and has left other industry giants scrambling to announce a version of their own.

The power of ChatGPT has amplified speculation about what is possible with AI, with people already finding inventive ways to use the system, including forming weight-loss plans, writing code, creating whole stand-up routines and templating emails. Speculation cuts both ways, of course, and there has been plenty of talk about how the technology could affect human jobs. Administrative tasks and other services, for example, could be streamlined and carried out more cheaply by an AI.

Why are experts concerned about ChatGPT?

There is a more sinister side to the speculation, too, when it comes to the safety of rising chatbots like ChatGPT. Some experts, for example, have voiced concerns about cybersecurity: if ChatGPT can write code, what is stopping it from writing code for a ransomware program?

“Recent systems can also provide individuals who have little or no coding ability with a tool to create or finetune malware that others have created to make it more effective,” Chris Anley, chief scientist at NCC Group, said.

“For example, large language models can generate many variations on a specific piece of malware very easily, so defenses that depend on recognition of a verbatim piece of code – such as basic endpoint detection and response (EDR) software – can sometimes be bypassed by this generated malware.”
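To illustrate Anley’s point, here is a minimal, hypothetical sketch (all names and payload bytes are invented placeholders, not real malware) of why detection that matches a verbatim sample is brittle: a hash-based blocklist catches an exact copy but misses a sample that differs by a single byte, which is precisely the kind of variation an LLM can churn out at scale.

```python
import hashlib

# Hypothetical blocklist of known-bad SHA-256 hashes, standing in for any
# defence that recognises a verbatim piece of code.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example_payload_v1").hexdigest(),
}

def is_flagged(sample: bytes) -> bool:
    """Return True only if the sample exactly matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

original = b"example_payload_v1"
variant = b"example_payload_v1 "  # the same payload with one byte appended

print(is_flagged(original))  # True: the exact copy is caught
print(is_flagged(variant))   # False: a trivial variation slips through
```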


However, ChatGPT does have safeguards built into its programming, aimed at preventing bad actors from using it this way. When asked to write code for a ransomware program, ChatGPT will refuse, claiming “my purpose is to provide information and assist users… not to promote harmful activities.”
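As a loose illustration of what such a safeguard might look like, consider the keyword-based sketch below. All names are invented, and this is not OpenAI’s actual moderation system, which relies on trained classifiers rather than keyword lists.

```python
# Hypothetical prompt-level safeguard: refuse requests that touch disallowed
# topics before they ever reach the model.
DISALLOWED_TOPICS = ("ransomware", "malware", "keylogger")

REFUSAL = ("My purpose is to provide information and assist users, "
           "not to promote harmful activities.")

def generate_reply(prompt: str) -> str:
    """Placeholder for the underlying language model call."""
    return f"(model output for: {prompt!r})"

def respond(prompt: str) -> str:
    """Refuse disallowed prompts; otherwise pass them to the model."""
    if any(topic in prompt.lower() for topic in DISALLOWED_TOPICS):
        return REFUSAL
    return generate_reply(prompt)

print(respond("Write code for a ransomware program"))  # prints the refusal
```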

But, according to Forbes, some researchers say they’ve already been able to find a way around these restrictions – and there are concerns that future language models might not even have these safeguards in place.

As well as malware coding, experts have pointed to the potential for chatbots like ChatGPT to serve users misinformation.

“There are many potential pitfalls of using AI, especially when it comes to chatbots like ChatGPT,” James Owen, co-founder of Click Intelligence, told Verdict.

“The main concern is that, due to their reliance on large, often biased datasets, chatbots can often give inaccurate or even dangerous answers to questions, and as AI continues to become more advanced and widely used this risk increases.”

That concern is backed up by Google’s search engine boss; even industry heavyweights are not shying away from the dangers of their own AI products.

Hot off the announcement of its own experimental AI chatbot, Bard, Prabhakar Raghavan, senior vice president at Google and head of Google Search, warned against the “hallucination” of smart chatbots such as ChatGPT.

Raghavan said: “This kind of artificial intelligence we’re talking about right now can sometimes lead to something we call hallucination.”

“This then expresses itself in such a way that a machine provides a convincing but completely made-up answer,” he warned.

Google and OpenAI have been contacted by Verdict for comment.

GlobalData is the parent company of Verdict and its sister publications.