Social robots are on the rise, and their ability to gain our trust means they could be used for cybercriminal activity, such as heists that involve gaining access to secure buildings, according to research published today.
A joint research project by cybersecurity company Kaspersky and scientists at Ghent University has explored how the intersection of social engineering and robotics security could be abused by criminals – with concerning results.
Focusing on social robots, which are designed to interact with the public using either verbal or non-verbal communication, the research found that the inherent trust we place in such machines could easily be abused, particularly if they are hacked.
Robots and heists: Using social engineering to gain access to restricted areas
The research involved conducting several experiments using social robots, including gaining access to secure buildings. This is something often attempted in digital heists, where criminals – and cybersecurity red teams posing as criminals – try to access corporate networks, often to steal sensitive data, by gaining physical entry to a company’s premises.
In this experiment, a social robot was positioned near a secure entrance of a building in the city of Ghent, Belgium. The building in question was occupied by multiple organisations and required all of those entering to use a swipe card.
The robot was instructed to attempt to gain access by asking staff if it could follow them through the door into the secure area. In this instance, 40% complied, giving the robot access by holding the door open for it.
However, when the robot was equipped with a pizza and branded with the logo of a popular chain, far more people let it in, apparently taken in by the plausible reason it had to enter.
This may seem innocent enough, but if cybercriminals hacked a social robot, it could be used to help in some kind of heist using exactly this method – it certainly wouldn’t be out of place in an Ocean’s Eleven-esque setting.
Using robots to get personal data
This is not the only way the researchers managed to use social robots to obtain things that would be valuable in cybercriminal heists.
The researchers also used conversational social robots to engage with people, with a view to getting them to reveal personal information that could be used to reset passwords.
This was highly effective – with all but one participant, the researchers managed to extract sensitive personal details at a rate of one per minute.
In both cases, the information or access gained could easily be abused, making social robots’ natural trustworthiness a cause for concern.
“Scientific literature indicates that trust in robots and specifically social robots is real and can be used to persuade people to take actions or reveal information. In general, the more human-like the robot is, the more it has the power to persuade and convince,” said Tony Belpaeme, Professor in AI and Robotics at Ghent University.
“Our experiment has shown that this could carry significant security risks: people tend not to consider them, assuming that the robot is benevolent and trustworthy. This provides a potential conduit for malicious attacks and the three case studies discussed in the report are only a fraction of the security risks associated with social robots.
“This is why it is crucial to collaborate now to understand and address emerging risks and vulnerabilities – it will pay off in the future.”
Security of social robots
The research only explores a small number of ways in which robots could be used to facilitate digital heists or other forms of cybercrime, but in all cases there is a common denominator: social robots are often not terribly difficult to hack.
This is because, unlike in some other types of robotics with greater potential for immediate harm, security is deprioritised during development in favour of better social interaction – something the research suggests should be reconsidered.
“At the start of the research we examined the software used in robotic system development. Interestingly we found that designers make a conscious decision to exclude security mechanisms and instead focus on the development of comfort and efficiency,” explained Dmitry Galov, security researcher at Kaspersky.
“However, as the results of our experiment have shown, developers should not forget about security once the research stage is complete. In addition to the technical considerations there are key aspects to be worried about when it comes to the security of robotics.
“We hope that our joint project and foray into the field of cybersecurity robotics with colleagues from the University of Ghent will encourage others to follow our example and raise more public and community awareness of the issue.”