The online activity of hacker communities is a goldmine of information for those seeking to avert cybercrime. The problem is that monitoring it often involves a vast quantity of data, far more than any human could analyse.

Cybersecurity company Cyxtera has developed a platform, known as Brainspace, that uses artificial intelligence to monitor Twitter feeds to gain a better understanding of how cyber-criminals operate.

Verdict spoke to Cyxtera’s chief cybersecurity officer Chris Day about how the company is using social media to stay one step ahead of hackers.

How does it work?

Brainspace uses artificial intelligence to analyse millions of tweets in order to understand how different groups of hackers are communicating. It works by using a machine learning model to consume data, make decisions about how ideas are clustering, and then give the human operator an interface to interact with that data.
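Cyxtera has not published the details of how Brainspace clusters those ideas, but the general approach can be sketched in a few lines. The example below is purely illustrative: it uses TF-IDF features and k-means clustering from scikit-learn as stand-ins for whatever proprietary model Brainspace actually runs, and the sample tweets are invented.

```python
# Illustrative sketch only: Brainspace's actual model is proprietary.
# This shows the general idea of clustering tweets so related conversations
# surface together, using TF-IDF features and k-means as stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "new exploit for the router firmware dropped today",
    "selling fresh card dumps, dm for price",
    "patch your routers, the firmware bug is being abused",
    "who wants bulk cvv data, good rates",
]

# Turn free text into numeric vectors the clustering algorithm can work with.
vectors = TfidfVectorizer(stop_words="english").fit_transform(tweets)

# Group tweets into clusters of similar vocabulary (two clusters for the toy data).
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

for tweet, label in zip(tweets, model.labels_):
    print(label, tweet)
```

An analyst would then explore each cluster through an interface, rather than reading a quarter of a million tweets one by one.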

Day explains how the platform was able to monitor the Twitter traffic around one known Chinese hacker:

“We wanted to see, just looking at Twitter traffic, if we could uncover a network of Chinese-speaking hackers around him that we didn’t know about. Through patterns-of-life activity, could we better understand the network around him?”


Using Brainspace, it was possible to make sense of a quarter of a million tweets, identifying which hackers were communicating with each other and what they were talking about. These insights could then be acted upon.

Day explains how this information can be used to understand the behaviour of hackers:

“They might have their work, they’re a hacker for the Chinese military, but they also have a life. Often that life spills over. So that allows us to understand them better. In this case we’re continuously harvesting Twitter traffic and some other data sources. What are they talking about? Are there new concepts coming up? New topics? Do we see an increase in chatter around certain topics?”

This information is useful as it enables the company to predict what sort of attacks might be coming soon based on what is being discussed in hacker communities.
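The article does not describe how Cyxtera measures that chatter, but a crude version of the idea, counting topic mentions across time windows and flagging spikes, might look like the sketch below. The keywords, tweets and doubling threshold are all assumptions made for illustration.

```python
# Hypothetical sketch of "increase in chatter" detection: compare keyword
# frequencies between two time windows and flag topics whose volume spikes.
from collections import Counter

def keyword_counts(tweets, keywords):
    """Count how many tweets mention each keyword."""
    counts = Counter()
    for text in tweets:
        lowered = text.lower()
        for kw in keywords:
            if kw in lowered:
                counts[kw] += 1
    return counts

keywords = ["phishing", "ransomware", "zero-day"]
last_week = ["phishing kit for sale", "new ransomware build", "phishing templates"]
this_week = ["ransomware affiliates wanted", "ransomware payouts up",
             "fresh ransomware source", "phishing kit for sale"]

before = keyword_counts(last_week, keywords)
after = keyword_counts(this_week, keywords)

for kw in keywords:
    # Flag any topic whose mentions at least double week over week.
    if after[kw] >= 2 * max(before[kw], 1):
        print(f"Chatter spike on '{kw}': {before[kw]} -> {after[kw]}")
```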

Day believes that this enables the company and its customers to stay ahead of potential threats:

“We can do things like sentiment analysis, sentiment monitoring on social media to see if people are talking about us or our customers in negative ways. Is that increasing? Is there a call to action? So things like that might give us an early warning that something’s coming. We might better understand who’s behind those things and might give us the opportunity to take steps as well. Maybe we contact law enforcement. So it is a pathway to action for us.”
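Day does not detail the sentiment tooling involved. As a minimal sketch, assuming a simple lexicon-based approach, negative-mention monitoring for a brand could look something like this; the brand name "examplebank", the word list and the sample tweets are all hypothetical.

```python
# Minimal lexicon-based sketch of sentiment monitoring; Cyxtera's actual
# sentiment tooling is not described in the article.
NEGATIVE = {"breach", "leak", "hack", "dump", "scam", "down"}

def negative_mentions(tweets, brand):
    """Count tweets that mention the brand alongside negative terms."""
    hits = 0
    for text in tweets:
        words = set(text.lower().split())
        if brand.lower() in words and words & NEGATIVE:
            hits += 1
    return hits

stream = [
    "examplebank app is great today",
    "heard examplebank had a data leak",
    "someone is selling an examplebank customer dump",
]

# A rising count of negative mentions would act as the "early warning" Day describes.
print(negative_mentions(stream, "examplebank"))  # -> 2
```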

Identifying bots

AI has also been utilised to identify bot activity. Unlike traditional cyberattacks, this type of activity is designed not for data theft or financial gain, but to exert influence.

Day explained how Brainspace was used to identify a group of Russian bots through a code representation a human would not have picked up on:

“There is no apostrophe either in the Russian language or on a Russian keyboard. So a Russian programmer who’s building a bot that’s going to tweet in English, what they do when they use an apostrophe is they use a code representation. So visually, if you were looking at it as a human, you wouldn’t see that, as it renders as an apostrophe. So that then became an indication of Russian bot-ness and then we could use that indicator and go back into Twitter and search for users that are using that indicator. That allowed us to find a whole other set of Russian bots that are still active.”

“We did some work showing how these Russian bots are being used to influence both the extreme right and the extreme left in US politics right now. We were able to show that some of these bots had been re-tasked from previous activities, from fraud in the past to political influence. It really helped to unravel that network a bit and again [artificial intelligence] will do it very rapidly in a system like this.”

Without AI, this would have been extremely difficult to detect, Day explained.

“[without machine learning] you’d never notice the distinctions, the differences. You’d never be able to do the rapid data reduction, the rapid processing, so in our opinion that’s one of the powers of this platform.”
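Day does not say which code representation the bots used; plausible candidates include the HTML entity &#39; or the Unicode right single quotation mark (U+2019) standing in for the plain ASCII apostrophe. A minimal sketch of that kind of indicator, under those assumptions, might look like this:

```python
# Hedged illustration: the specific encoding is not named in the article, so
# this assumes candidates such as the HTML entity &#39; or the Unicode right
# single quotation mark (U+2019) standing in for a plain ASCII apostrophe.
import re

# Patterns that render as an apostrophe but are not the ASCII character itself.
APOSTROPHE_STAND_INS = re.compile(r"&#39;|&apos;|\u2019")

def has_encoded_apostrophe(tweet_text):
    """True if the tweet uses an encoded apostrophe rather than a plain one."""
    return bool(APOSTROPHE_STAND_INS.search(tweet_text))

samples = [
    "I can't believe the election results",        # plain ASCII apostrophe
    "I can&#39;t believe the election results",    # HTML-entity stand-in
    "I can\u2019t believe the election results",   # U+2019 stand-in
]

for text in samples:
    print(has_encoded_apostrophe(text), repr(text))
```

Once such an indicator is confirmed, it can be turned into a search query to surface other accounts exhibiting the same trait, which is how Day describes finding the wider bot network.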

Applications

Day believes that this technology could be used to prevent high-profile cyberattacks such as the recent British Airways data breach in which 380,000 card payments were compromised. The platform can be used to track user behaviour inside a network, which is useful when a hacker is using stolen credentials.

Day told Verdict:

“One area this platform is useful is insider threat. When your typical adversary is inside a network, especially if they have stolen credentials, they look like an insider doing bad things. So a system like this can be very helpful in cases where the adversary has spent time in the network, has stolen credentials and is potentially moving around in the network impersonating somebody, privilege escalating and potentially grabbing data sets which is what I believe happened in the British Airways attack.

“This system is very good at finding anomalies, whether it’s behavioural, or looking at network traffic anomalies.”
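The article does not describe how Cyxtera models that behaviour. As a rough sketch, assuming simple per-session features such as login hour and data volume, an off-the-shelf anomaly detector could flag the kind of out-of-pattern activity Day describes; everything in the example, from the feature set to the numbers, is invented for illustration.

```python
# Sketch of behavioural anomaly detection on hypothetical per-session features;
# the article does not detail how Cyxtera models insider behaviour.
from sklearn.ensemble import IsolationForest

# Hypothetical features per session: [login hour, MB transferred, hosts touched]
normal_sessions = [
    [9, 50, 3], [10, 40, 2], [11, 60, 4], [9, 55, 3],
    [14, 45, 2], [15, 70, 5], [10, 50, 3], [13, 65, 4],
]

# Fit on what "normal" looks like for this (hypothetical) account.
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal_sessions)

# A 3am session pulling far more data from far more hosts than usual.
suspect = [[3, 900, 40]]
print(detector.predict(suspect))  # -1 indicates an anomaly
```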

However, although AI is at the heart of Brainspace, Day believes that it is vital that human security analysts remain a key part of detecting and acting on cyber threats:

“All the AI and ML [machine learning] that we do at Cyxtera, we want it to be optimal for human teams. So we want to build nice interfaces for an analyst operator to interact with, so that we can leverage the things that humans are really good at, judgement and subject matter expertise, and let humans leverage what computers do best, which is high-speed processing.”

The future of cybersecurity

Although this type of system can be used to anticipate cyberattacks, Day believes that organisations must become more vigilant in protecting their systems from breaches:

“Companies need to start re-thinking their architectures.

“Part of the problem with cyber today is it’s very all or nothing. Either you’re not breached or you’re breached, and when you’re breached it becomes a wildfire event and you end up on the front cover of the Wall Street Journal because 150 million customers had their data stolen. It’s ridiculous that every phishing attempt that is successful turns into these massive breaches.

“We need to start thinking about architectures that have more resiliencies built in where a compromise doesn’t turn into a full-scale breach.”

Read more: This company is using AI to beat pollsters and provide more accurate political forecasts