How did a suicide terror attack in Israel lead to the establishment of a voice analysis emotion recognition company? In this interview, Nemesysco CEO Amir Liberman reveals why he founded his company and how he thinks technology can help organisations uncover people’s true emotions through natural conversation.

Unusually for a tech company these days, Nemesysco states outright that it does not use Artificial Intelligence (AI). Using so-called Layered Voice Analysis (LVA), the Nemesysco algorithm interprets a person’s emotion based on uncontrolled psychophysical changes to their voice. “AI is a trained model and the trained model is biased by the person who trained it. But this is not an AI-based system,” Liberman insists, bucking the trend of AI-in-everything, particularly prevalent in the Israeli startup scene.

Founded in 2000 in Netanya, Israel, Nemesysco started out as a security company, but eventually branched out to also offer fraud prevention tools, automated risk assessment, personality tests and pre-employment evaluation systems. Clients and partners include Nestlé, Endemol, Allianz and Castel. Here’s our interview with Liberman:

Tell me a bit about your company, Nemesysco.

Back in 1997, we had this terror attack in Israel and what I wanted to do was to build a lie detector. This terror attack took the lives of three young mothers. It truly was heart-breaking. What I wanted to do was to build a quick lie detector that we could put on the borders and ask people when they come into Israel, “Do you plan to commit a suicide attack?”

So that was the very naïve thought of a 24-year-old guy. And when we started researching, it was initially, of course, just me with some friends. We started in a very naïve way: we took people and recorded them saying, “What colour is the sky? The sky is green, grass is blue, I’m the Queen of England …” And what we got eventually was that there was very little reaction; nothing was really standing out.

And then we had this situation where one of my friends asked a very blunt question, the guy actually lied, and the system picked it up like a bomb exploding on the screen. Everything turned red. So that was the moment when we actually realised that lies have to have some meaning; there has to be something of essence. Then what we discovered was that it wasn’t the lie that we were picking up, it was all the different ingredients of a lie.

We got a university dataset that was prepared in Israel, and it was built around a Stroop test, which shows you a card with the word “red” written in green, which creates the same reaction in the body as a lie would. That was the assumption, at least, in our system. Actually this is not a lie – there is a conflict, but it’s not a lie.

So is lie detection still the core part of your business?

No, I would say not. We were originally purely a security company. But back in 2005 we made a switch.

For an entity, yes, lies are interesting, but how about learning more about your employee’s personality? The real personality. You know, they reveal things that they didn’t even know themselves once they talk about it, once they are confronted with the result. [As an employer] why don’t you try to understand who the [employee] is, to put them in the right job?

So at some point we said, okay, it’s not lie detection. The thing about lie detection is that there is no such thing. Take it from me, I have been dealing with this for 24 years. There is no such thing as lie detection!

All you can do is present the stimuli and show the reaction. What we can pick up is the moment where you feel the jeopardy, when you feel the tension, when you want to fight but still want to run away. It’s actually a very interesting moment in time, when you are past a certain point that you don’t want to confront. It’s a very strange situation. So, we can pick up these sets of emotions, but it’s not a lie: it’s the reaction around the lie.

What sectors does your company work with?

We still work with governments. We work with investigations, which is actually a very positive experience, because you cannot use our system and torture someone, for example. It doesn’t work. So we kind of promote fair investigation, and I really like that.

We also work with insurance companies, recruitment agencies, credit services, banks and marketing agencies.

We work with HR from entry point to exit point. From recruitment and veracity assessment to personality assessment, to interviews and how one is feeling in the organisation. All the surveys that people do, where they may polish the truth a bit. They may not reveal things just because they don’t want to hurt anyone. But this is counterproductive, to them and to the organisation.

We work a lot today with medical applications as well. And it goes way beyond that, (as) we’ve also been used in entertainment, with games and with matchmaking TV shows. We have been featured on Big Brother on several occasions.

Why is it important to have emotion recognition rather than just voice analysis?

The question is very simple: Do you want to know the truth? As a manager, I need to make decisions based on good data. And if the data is not accurate, then it’s not going to serve anyone.

Don’t you think humans already have the capability to detect emotion? Do we really need technology or AI to do this?

Well, I’m not sure if AI is the right thing; we’ll talk about AI in a second.

The thing is, first of all, humans are very bad at detecting other people’s emotions. That’s a fact. As humans, we always come with our own bag of emotions from home. We had a bad day, something happened to the cat, something happened to the spouse; you know, you’re judging everything very differently. So we want something that is completely unbiased. That looks at you as you are.

That’s a very interesting choice of words. Specifically because a lot of people opposed to emotion recognition would say that it is inherently biased and that bias cannot be taken out of it. How do you respond to that?

Because that is AI. AI is a trained model and the trained model is biased by the person who trained it – absolutely. But this is not an AI-based system.

What we do is more like a DNA examination. We know what specific sequences in voice patterns look like and what they should be associated with in terms of emotional reaction. We know what it looks like; we don’t need an AI to do that.

That’s the challenge with emotions really: there are no scales. There is no set of proper definitions of what happiness is. Is this guy happy three points, or is this guy happy 60 points? There was no scale. We were the first to actually offer a scale. So this is what we do.

It really was a matter of working from the ground up, building the entire theory. Not just as a theory, but as evidence-based science. We were the first to actually offer quantifiable measurements for emotions.

Our technology was developed from the ground up based on initial bio-markers that were calculated from the voice without any assumed meaning. We were just observing them changing during real-life calls.

Then, during our ongoing research on recorded audio that we received, some unique sessions generated a distinctive reaction in some of these variables. Our work was manual, but these very distinctive and obvious sessions were then added to the assumption list and validated against other, less pronounced data from the same family of emotions.
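
To make the idea of “bio-markers calculated from the voice without any assumed meaning” a little more concrete, here is a minimal sketch. The specific features (per-frame energy and zero-crossing rate) and the outlier check are assumptions made purely for illustration; they are not Nemesysco’s actual LVA measurements, which remain proprietary.

```python
# Purely illustrative: generic, content-free voice measurements computed per
# frame, with a simple check for frames that deviate from the recording's
# baseline - loosely mirroring "observing the variables change during
# real-life calls". Not Nemesysco's actual LVA bio-markers.
import numpy as np

def frame_features(signal, sample_rate, frame_ms=25):
    """Split a mono signal into fixed-length frames and compute crude features."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    features = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        energy = float(np.mean(frame ** 2))                        # loudness proxy
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)  # roughness proxy
        features.append({"energy": energy, "zcr": zcr})
    return features

def flag_unusual_frames(features, key="energy", z_threshold=2.5):
    """Flag frames whose value deviates strongly from the recording's own baseline."""
    values = np.array([f[key] for f in features])
    z_scores = (values - values.mean()) / (values.std() + 1e-9)
    return [i for i, z in enumerate(z_scores) if abs(z) > z_threshold]
```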

But it is still a human that defined the scales. Does this not leave room for bias?

Initially we analysed and identified stress. When we started, it was all about stress. There was no knowledge about any other emotions. It was not defined. Everything was stress. And we took recordings from crashing planes, from pilots with a mayday alert. We took recordings from death row prisoners just before the thing, and there were things that stood out. And so we said, okay, this is an extreme state of stress and this is a normal state of stress. And now, let’s build a scale between them.

Now we could normalise it between zero and 100. We only dealt with real-life data and with real-life materials.
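
The anchoring Liberman describes – an extreme-stress recording at one end, a normal-stress recording at the other, and a 0–100 scale built between them – can be sketched, under our own simplifying assumptions, as a basic min-max style normalisation. The real LVA scaling is not disclosed; this only illustrates the idea of a scale defined by two real-life anchors.

```python
# Illustrative only: map a raw stress-related measurement onto a 0-100 scale
# defined by two anchor values taken from real recordings
# (normal stress -> 0, extreme stress -> 100). Not the actual LVA scaling.
def stress_score(raw_value, normal_anchor, extreme_anchor):
    span = extreme_anchor - normal_anchor
    if span == 0:
        return 0.0
    score = 100.0 * (raw_value - normal_anchor) / span
    return max(0.0, min(100.0, score))  # clamp to the defined scale

# Example: a reading of 0.7 between anchors 0.2 (normal) and 1.0 (extreme) -> 62.5
print(stress_score(0.7, normal_anchor=0.2, extreme_anchor=1.0))
```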

There’s also the argument that different people’s emotions are expressed differently depending on their cultural upbringing and societal background. How do you account for that bias?

That’s a wonderful question. Thank you.

So from a theoretical point of view, what we had in mind at the very beginning was that everything that you can control, whether it’s your choice of words, the way you pronounce things, the volume, all of that should be ignored. They’re not even worth taking into account, because you can control them. They have zero value.

Then we’re talking about all the ingredients that you can no longer control. And then apparently, we are all the same: We are all humans, the same animal. We all have the same brain activities and our brain is built very, very similarly, if not identically.

How do you think you are different from your competitors?

We are not AI-based. Everybody’s approach today is to bring in actors who express various emotions in different ways and at different intensities and say, okay, let’s train on these emotions. And let’s train a model on these emotions. We work in a completely different way. We actually worked from the ground up.

When you take an AI and you teach it to recognise all these performed emotions – well, the moment my system classifies an actor expressing anger as genuinely angry, that’s the day I know I have to throw everything out and go back to the drawing board.

If I’m fooled by actors, how can I stay true to the true purpose of the system? That’s why we are dealing with genuine emotions.

Don’t you think that there are certain things, when people intend to hide them, that they should stay hidden? How do you respond to people saying that this is an infringement on privacy and freedom?

It’s always a challenging question and, of course, everything has to be done according to the laws and according to what is applicable and what is allowed and fair.

The thing is, if I, as an employee, get the chance to say how I feel about my boss and I say, ‘Well, I love it, I think he’s a great boss,’ okay, I did my role. But deep inside, I know that the boss is awful, and something picks that up from my voice.

And not just from my voice, but maybe from a few other people who did the same. They also said that the boss was magnificent, but they all felt differently. Don’t you think everybody wins except for this bad boss?

Well, what my argument is, if a person wants to hide something, don’t they have the right to hide something?

You always have the right to refuse a test, you always have the right not to participate. Anyway, if you don’t want to lie, don’t lie.

But if you do want to lie should you not be allowed to lie?

Listen, if I’m the employer and I pay your salary, I want to make the best use of my money to achieve the best productivity (and) the best environment I can. To do that I need to base my decision on knowledge, and the better the knowledge I have, the better my decisions can be.

You said that a person should always have the right to refuse the test, which means that they should always know that it is taking place and that their emotions are being monitored by technology. But consumers don’t always know this. How do you justify that?

I think it’s better when people know what is going on, and they should be aware of the consequences. But again, you know, you’re on Facebook, you’re sharing your life with everybody, you’re sharing your most intimate moments with family, with everybody.

There are so many things that monitor you. The way you write, everything you type, everything you say, how quickly you type on your keyboard. The thing is, it’s not about the technology that is being used, but how it is being used.

What you’re saying is it’s not the technology that is bad, it’s the people behind the technology?

Right now your cell phone knows more about you than you’ll ever know yourself. Let alone emotion detection, it knows what you like and don’t like, and knows what time you wake up, whom you like, whom you speak with, whom you don’t speak with.

It knows everything and knows where you are every second of the day. You think you have privacy, think again. Today, I don’t think anybody in the world has any sense of privacy.