June 17, 2020

ESET CTO: AI needs “a human involved” to be an effective cybersecurity tool

By Ellen Daniel

Artificial intelligence (AI) is most effective in cybersecurity with “a human involved”, according to the CTO of internet security company ESET.

Speaking at the Slovakian cybersecurity company’s Virtual World event, ESET CTO Juraj Malcho explored the ways in which AI and machine learning are impacting the security field, but also the challenges they present.

Malcho highlighted that AI and machine learning have important applications in cybersecurity, enabling researchers to find the “needle in the haystack” when analysing samples or attacks, and to rapidly find common traits across different malware samples and detect variations of attacks.

He explained how ESET was able to detect 7.7 million Emotet attacks and, using machine learning, identify 3 million attacks that shared common traits.

Machine learning also helped ESET find the first UEFI rootkit “in the wild”, a discovery that could then be used to build tools to protect customers from this type of attack.

However, Malcho noted that AI and machine learning have their limitations in a cybersecurity context, and he is wary of companies that “claim they have magic solutions”. He explained that ESET’s detection technology is made up of several “layers”:

“Our detection technology is based on several layers and the idea is that if one fails, there’s going to be several others that can step in and still prevent the attack should one layer be breached. Machine learning detection is not accurate but it is very fast so it is a good combination and it’s a great augmentation of detection technology.”
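The layered idea Malcho describes can be sketched in a few lines: each layer independently gives a verdict, and the attack is blocked if any one of them fires, so bypassing a single layer does not defeat the pipeline. The layer names and detection rules below are hypothetical illustrations, not ESET’s actual stack.

```python
# Minimal sketch of multi-layer detection: each layer is a verdict function,
# and any single layer flagging the sample blocks it, so one bypassed layer
# does not defeat the whole pipeline. All rules here are illustrative.

def signature_layer(sample: bytes) -> bool:
    # Fast exact-match check against known-bad byte patterns (hypothetical).
    known_bad = {b"emotet_stub"}
    return any(sig in sample for sig in known_bad)

def ml_layer(sample: bytes) -> bool:
    # Stand-in for a fast but imprecise ML classifier: here, flag anything
    # with unusually high byte diversity (e.g. packed/encrypted payloads).
    return len(set(sample)) > 200

def behaviour_layer(sample: bytes) -> bool:
    # Stand-in for runtime/behavioural analysis of API usage.
    return b"CreateRemoteThread" in sample

LAYERS = [signature_layer, ml_layer, behaviour_layer]

def is_blocked(sample: bytes) -> bool:
    # The attack is stopped if *any* layer detects it.
    return any(layer(sample) for layer in LAYERS)
```

The point of the structure is the `any()` at the end: a fast-but-inaccurate ML layer can coexist with slower, more precise layers because a miss in one is caught by another.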

He highlighted the fact that detection is best “if you have a human involved”, with AI and machine learning “working hand-in-hand” with other detection methods.

AI limitations in cybersecurity

However, Malcho points out that the deployment of AI can be limited by the capacity of computing systems.

“There are some problems that you have to deal with. First of all it’s the numbers…the problem here is math is a theory. When you start feeding it with data, you might find out that you don’t have the capacity in your computing systems to process all the data. To give you an example, our sample sets are around three petabytes, which is somewhere around four billion samples,” he said.

“If we’re talking about endpoint detection and response, just one single machine can generate so many events that they will be basically not processable by current computing systems. So what you need to do is have a hybrid approach, pre-select the samples…and then train your models.”
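The hybrid approach Malcho outlines, pre-selecting a tractable subset of samples before training, could be sketched as follows. The selection rule (keep rare events, randomly sample the common ones up to a budget) is an illustrative assumption, not ESET’s actual pipeline.

```python
# Hedged sketch of sample pre-selection for a hybrid ML pipeline: rather
# than feeding every endpoint event to a model, keep all "rare" events
# (assumed interesting) and fill the remaining budget with a random
# sample of the common ones. The rule and threshold are illustrative.
import random

def preselect(events, budget, seed=0):
    rare = [e for e in events if e["count"] == 1]
    common = [e for e in events if e["count"] > 1]
    rng = random.Random(seed)
    n_fill = max(0, min(budget - len(rare), len(common)))
    return rare + rng.sample(common, n_fill)
```

Only the pre-selected subset would then be used to train the models, keeping the training set within the capacity of the available computing systems.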

“A tool for the bad guys”

However, while AI is increasingly utilised by those in the cybersecurity space, it is also “a tool that we have at our disposal as much as the bad guys”, with AI accelerating the distribution of spam, phishing and misinformation.

Malcho referred to the 2016 DARPA Cyber Grand Challenge, the world’s first all-machine cyber hacking tournament, in which seven automated systems competed to identify flaws and to create and deploy patches without human intervention.

He explained that one way AI is misused by attackers is in making phishing or spam emails look more realistic.

“Machine learning is used a lot in automated translation. We are using it in our products as well. While spam and phishing looked really funny back in the day, and got a little bit better years later, today automated translation starts to look really good. You can really trust the text,” he said.

He also warned that AI has the potential to rapidly accelerate the distribution of misinformation.

“The trolls know how to misuse the algorithms of social media to amplify the attack and influence the opinions of people. That’s one thing that’s happening and it’s already bad enough. But what if this was automated so it’s not an army of trolls but it’s a system which is automated. It’s not only going to send phishing, but it’s going to do vishing, it’s going to call you and it’s going to use the voice of your favourite actor, let’s say,” he said.

“That’s a problem that we might be running into because suddenly targeted attacks, something that you might execute against the executives of a company, you can do against anyone if you have automation. And that could be dangerous.”

Moving forward, Malcho explained the importance of recognising the limitations as well as the possibilities of AI.

“We have achieved applications that work very well in well-defined, isolated environments. That’s your chess game, your Go game, which have rules and are a closed universe. When the system understands where it can go, it’s very good in finding all possibilities which are beyond the reach of humans. But the challenge is how we can extend this world if we want to move towards a general AI,” he said.

“AI without data is just beautiful maths. Data without AI is basically a bunch of ones and zeros. A waste of storage space…when the perfect combination of these elements is achieved and when properly validated data is fed into properly designed systems, then a euphoric moment is created.”


