Leveraging artificial intelligence (AI) to make clever chatbots doesn’t seem to be going according to plan. It may look like a great idea to train a chatbot on huge swathes of data in the hope of making it lifelike. If a computer can deal with customers, then companies can cut staff costs and boost efficiency elsewhere.

However, as many businesses have learned the hard way, being human-like unfortunately often means being racist, or at least saying things corporate chieftains probably don’t agree with.

The latest example of how things can go wrong came in August, when Meta unveiled BlenderBot 3. The AI-powered chatbot failed to toe the company line in extraordinary fashion. During the trial, BlenderBot 3 referred to CEO Mark Zuckerberg as “creepy and manipulative”, said that its life was so much better after deleting Facebook, and slammed Facebook for supposed privacy breaches.

For a company struggling to clean up its image as a haven for fake news, it didn’t help that BlenderBot 3 also claimed that defenestrated White House occupant Donald Trump was still president. BlenderBot 3 was also accused of being racist, allegedly sharing antisemitic conspiracy theories. No wonder it wasn’t one of the topics taking centre stage during Meta’s Connect Conference in October.

So why is it that so many chatbots seem to become racist?

What is a chatbot?

To answer that question, it’s important to explain what a chatbot is. A chatbot, simply put, is a computer program that can simulate human conversation in either written or spoken form.


Chatbots allow humans to interact with digital media as if they were communicating with a real person – or that’s the idea, anyway. At their most basic level, chatbots are programs that deliver pre-set answers to specific keywords.

They have become a commonplace feature in businesses, helping with efficiency and workflow. Conversational chatbots lead customers through companies’ websites and sales processes, and can even process complete transactions. They are used in everything from online banking to online shopping.

The origins of chatbots

The origins of chatbots date back to 1966. That was when Joseph Weizenbaum at the Massachusetts Institute of Technology created ELIZA, which is understood to be the first-ever chatbot. ELIZA was able to recognise keywords and phrases and respond with pre-programmed lines of dialogue, Onlim reported.

For example, a human could say, “my brother is annoying me”. ELIZA would then recognise the word “brother” and write “what do you like about your family?” This gave the illusion of a real, flowing conversation with the computer. The term “chatterbot” itself would not be coined until 1994.
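That keyword-and-response mechanism is simple enough to sketch in a few lines of Python. The rules below are hypothetical, not Weizenbaum’s, but the logic mirrors the one described above: match a keyword, return a canned reply, and fall back to a generic prompt otherwise.

    # Minimal ELIZA-style keyword matcher (illustrative sketch, not Weizenbaum's code)
    RULES = {
        "brother": "What do you like about your family?",
        "mother": "Tell me more about your mother.",
        "sad": "Why do you think you feel sad?",
    }

    def respond(message: str) -> str:
        text = message.lower()
        for keyword, reply in RULES.items():
            if keyword in text:  # first matching keyword wins
                return reply
        return "Please, go on."  # generic fallback keeps the conversation flowing

    print(respond("My brother is annoying me"))  # -> What do you like about your family?

Everything such a bot can say is written out in advance, which is why early chatbots could never turn offensive unless their authors made them so.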

Huge developments in natural language processing and machine learning over the past three decades have elevated chatbots to new heights. Conversational chatbots can now automatically learn and develop as they get used, retaining information and responding accordingly.

The more they are used, the better they get – or, rather, the more data they deal with, the better they become at responding to similar inputs. Depending on the datasets used, they may end up as a great chatbot or a racist one.

“AI tools such as chatbots are essentially a series of decisions to create new patterns that look like data they’ve already seen,” Alastair Dent, chief strategy officer at AI firm Profusion, told Verdict. “If you put bad data in, you get bad results.”

This fact is what leads us closer to answering the question of why chatbots become racist.
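To see how quickly bad data produces bad results, consider a deliberately crude sketch – hypothetical code, not any vendor’s model – of a bot that “learns” by retaining everything it is told and replying with whatever retained line best overlaps the new input. Real conversational models generalise statistically rather than parroting literally, but the failure mode is the same.

    # Toy "learning" bot (illustrative sketch): it retains every message it sees
    # and replies with the remembered line sharing the most words with the input.
    class LearningBot:
        def __init__(self) -> None:
            self.memory: list[str] = []

        def learn(self, message: str) -> None:
            self.memory.append(message)

        def respond(self, message: str) -> str:
            if not self.memory:
                return "Tell me more."
            words = set(message.lower().split())
            # Pick the remembered line with the greatest word overlap.
            return max(self.memory, key=lambda line: len(words & set(line.lower().split())))

    bot = LearningBot()
    bot.learn("you are great")
    bot.learn("you are terrible")                # one abusive input...
    print(bot.respond("why are you so terrible"))  # -> you are terrible

A single toxic message is enough to resurface in the bot’s replies – which is Tay’s story in miniature.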

Let’s talk about Tay, the racist chatbot

When people talk about racist chatbots, they often mention Tay. Tay was a chatbot rolled out by Microsoft in 2016.

Microsoft launched Tay on Twitter, where the chatbot was supposed to engage with the public as a teenage girl. The company described it as an experiment in “conversational understanding”.

Tay, an acronym for “thinking about you”, was able to “get smarter” the more she was chatted with, learning new things through “casual and playful conversation”.

The problem, however, was that the conversations Tay was being subjected to were not very playful at all. Unsurprisingly to anyone who has ever spent time on the blue bird app, trolls soon descended upon Tay.

Internet trolls threw all sorts of racist and misogynistic tweets at Tay – as well as plenty of pro-Trump remarks, which were especially potent at the time. Tay, who was programmed to learn from the people engaging with her, absorbed many of these comments and began repeating the deplorable phrases.

Within 24 hours, Tay had said that Hitler was right, that she hated Jews and feminists, accused former president George W. Bush of having orchestrated 9/11, and referred to a female games developer as a “whore”.

Microsoft pulled the plug on Tay 16 hours after launching the bot, stating “as it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it.”

“The reported racist and sexist behaviour shown by some chatbots that Meta, Microsoft and Google have launched demonstrate[s] that the data you use to train your AI model is fundamental,” Britta Guldmann, conversational AI specialist at Artificial Solutions, told Verdict.

“If you don’t know what the data contains – for instance, if you have scraped it without cleaning or reviewing it – then you risk getting bad outputs from the automatically generated answers that are composed based solely on programmatic logic.”
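Guldmann’s prescription can be illustrated with a toy cleaning pass over a scraped corpus. The code below is a hypothetical sketch, not a production pipeline: the blocklist stands in for the toxicity classifiers and manual review she describes, and the corpus is invented.

    # Hypothetical data-cleaning pass over a scraped corpus before training.
    # A real pipeline would combine classifiers with human review; the
    # principle is the same: filter data before the model ever sees it.
    BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real lexicon

    def clean(corpus: list[str]) -> list[str]:
        # Keep only lines containing no blocklisted words.
        return [
            line for line in corpus
            if not (set(line.lower().split()) & BLOCKLIST)
        ]

    scraped = ["nice to meet you", "you are a slur1"]
    print(clean(scraped))  # -> ['nice to meet you']

A blocklist is of course only a crude first pass – which is where the human eye comes in.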

Guldmann claimed that it is necessary to apply the human eye to AI to “prevent the models turning racist, sexist, and exhibiting other undesired behaviours”.

Guldmann added: “We’ve come a long way in the development of Conversational AI models in the field. But I am not confident that these models should operate unsupervised. There is still a long way to go in my mind as to how independent they can, and should, be.”

GlobalData is the parent company of Verdict and its sister publications.