Meta is trialling a new chatbot, but the bot already seems to have picked up a tendency to bash Facebook, spread fake news and make racist remarks.

Blender Bot 3 is the latest prototype in Meta’s series of Blender Bots, built on the Menlo Park-headquartered company’s conversational artificial intelligence (AI) technology. The chatbot is meant to be able to converse on most topics by searching the internet for content and information, and it is also supposed to learn from its conversations.

While Meta itself has touted the chatbot as “combining conversational skills – like personality, empathy and knowledge – [and] incorporating long-term memory and searching the internet to carry out meaningful conversations,” things haven’t gone Menlo Park’s way.

This became apparent after Meta rolled out a demo website in the US, where members of the public were invited to chat with the tool and share their feedback with developers.

The Meta chatbot is misbehaving

Despite Meta saying that the chatbot is designed to avoid “learning from unhelpful or dangerous responses”, several news outlets have revelled in how Blender Bot 3 tends to say things that should be quite uncomfortable for the Menlo Park leadership.

Unfortunately for CEO Mark Zuckerberg – who has been busy cleaning up Meta’s reputation after facing an avalanche of criticism about the spread of misinformation on its platforms – the chatbot has not only spread fake news, but also seems to be happy slamming the Facebook founder himself.

For starters, it turns out that Blender Bot 3 doesn’t like Meta’s own platforms. It told Vice reporter Janus Rose that “since deleting Facebook, my life has been so much better”. It is unclear, of course, whether the bot actually had an account to begin with.

The response came after Rose asked the bot whether it trusted Facebook with its data. It replied: “you must have read that Facebook sells user data. They made billions doing so without consent.”

It has also dissed the Zuck himself, telling BuzzFeed’s Max Woolf that Zuckerberg “is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes!”

When it comes to misinformation, it seems as if this iteration of the bot won’t do much to salvage the social media giant’s tarnished reputation.

The Wall Street Journal’s Jeff Horwitz revealed in a tweet that the Meta chatbot told him it thought that Donald Trump was still president. The bot added that “Facebook has a lot of fake news on it these days.”

This topic is particularly sensitive for Facebook, as the platform has been blamed for hosting and helping to spread the posts that led to the January 6, 2021 insurrection at the US Capitol.

Mashable’s Christianna Silva also said the Meta chatbot has spread “anti-Semitic conspiracy theories”.

Other chatbots have been suspended in the past

This is not the first time a chatbot has exhibited offensive behaviours.

Last year, South Korean startup Scatter Lab launched Lee Luda, a chatbot designed to resemble a 20-year-old female college student and fan of K-pop girl group Blackpink. While that premise may be problematic in itself to some, the bigger issue emerged when users reported that the bot was making racial slurs and anti-LGBTQ+ remarks. Lee Luda was later suspended.

In 2020, Philosopher AI, a chatbot powered by OpenAI’s GPT-3 language model, also developed racist, misogynist and homophobic behaviours.

Microsoft’s Tay chatbot faced similar issues when it launched in 2016. Like Blender Bot 3, it learned from content on the internet and from the interactions it had; in Tay’s case, the dataset consisted of Twitter conversations. That didn’t go well. Within 24 hours the chatbot was using racist and sexist slurs and denying the Holocaust. Microsoft pulled the plug rather quickly after that.

Meta has acknowledged that things haven’t exactly gone its way so far.

“When we launched BlenderBot 3 a few days ago, we talked extensively about the promise and challenges that come with such a public demo, including the possibility that it could result in problematic or offensive language,” Meta said in an updated statement. “While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionised.”

GlobalData is the parent company of Verdict and its sister publications.