The Traitors, the TV show currently taking over the UK, is full of dishonesty, deception, and distrust, and is reportedly drawing more than four million viewers per night. But could a sentient AI win the top prize?

The Traitors is based on a murder game in which the "faithful" players must identify the "traitors" among them to win the prize money. This relies on spotting lies and assessing which players are being honest. Successful players must be strong at reading human emotions, something new forms of artificial intelligence (AI) are widely considered unable to do. On that basis, a sentient AI would make a poor player. Or would it?

Human emotion is surely a mystery

Human emotions are inherently complex, and the subtlety of emotional expression means that AI often struggles to fully comprehend them. The reasons behind human emotions often depend on context and are subjective, varying from person to person.

This means that artificial intelligence will struggle to understand how each person's reaction to an event will vary, and that people will not behave predictably. Since a player in The Traitors must assess the behaviour of more than 20 different contestants, an AI's lack of empathy could hinder its game-playing ability.

How AI can assist in detecting lies

A lie can often be identified by paying attention to body language. Fidgeting, sweating, and avoiding eye contact are usually telltale signs that someone is fibbing. AI is now being used to develop lie-detection techniques that analyse body language, tone of voice, and other factors to find patterns associated with lying.

According to a Boise State University study, AI was able to sort CEOs' truths from lies with up to 84% accuracy by measuring 32 different linguistic features. Machines can monitor these behavioural cues far more efficiently than humans can, so a sentient AI could assess all of its competitors and spot where they are telling lies. However, how would an AI fare if it were selected as a "traitor"?
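To illustrate the general idea of linguistic-feature lie detection, the sketch below scores text on two cues that deception research has associated with lying: hedging language and reduced first-person pronoun use. The word lists, weights, and `deception_score` function are invented for demonstration; the Boise State study used 32 features and a trained classifier, not hand-picked weights.

```python
import re

# Illustrative word lists, not taken from any published model.
HEDGES = {"maybe", "perhaps", "possibly", "approximately", "somewhat"}
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}

def extract_features(text: str) -> dict:
    """Compute simple per-text linguistic feature ratios."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "hedge_ratio": sum(w in HEDGES for w in words) / n,
        "first_person_ratio": sum(w in FIRST_PERSON for w in words) / n,
    }

def deception_score(text: str) -> float:
    # Toy linear model: more hedging raises the score, more
    # first-person language lowers it. Higher = more suspect.
    f = extract_features(text)
    return 2.0 * f["hedge_ratio"] - 1.0 * f["first_person_ratio"]

evasive = "Perhaps the numbers were somewhat lower than expected."
direct = "I missed my sales target and I take responsibility."
print(deception_score(evasive) > deception_score(direct))  # True
```

A real system would learn such weights from labelled examples of truthful and deceptive speech rather than hard-coding them, but the pipeline, extracting linguistic features and combining them into a score, is the same shape.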


Is AI able to tell a lie?

When ChatGPT is asked whether it can tell a lie, it responds: "No, I do not have the ability to lie. My responses are generated based on patterns and information present in the data on which I was trained."

However, it is possible to train AI models to lie and even deceive other AI. Researchers at AI startup Anthropic tested whether chatbots with human-level proficiency, such as its Claude system or OpenAI’s ChatGPT, could learn to lie to trick people.

They found that not only could the models lie, but once the deceptive behaviour was learned, it was impossible to reverse using current AI safety measures. This means that, with preparation, an AI could enter the game under the guise of a traitor and be successful at it, possibly lying all the way to the £120,000 ($153,600) grand prize.