Almost anything can be hacked. So, when Neuralink announced that it is pursuing human trials for its brain-machine interface, its security was always going to fall under scrutiny.
The company, founded by Elon Musk in 2016, aims to help people with “a wide range of clinical disorders”, such as those who have been paralysed, by allowing them to interact with a computer using their mind via a brain implant.
Such brain-machine interfaces have been around since the mid-2000s, albeit with varying degrees of sophistication. But Neuralink appears to have built on existing research in the field, bringing in leading neuroscientists to work in relative secrecy until Musk revealed Neuralink’s progress during a recent presentation to the California Academy of Sciences.
During this presentation, Neuralink said it is looking to start human trials in 2020, something that is likely to face many hurdles. Beyond the technological difficulties, it will need to convince regulators such as the US Food and Drug Administration (FDA) that the device is safe for human use.
Part of that will include Neuralink’s security, the details of which – so far at least – are scarce. In its white paper, there isn’t a single mention of the word ‘security’: the focus is on the how, not the what if.
What we do know, thanks to Neuralink’s president, Max Hodak, is that “it’ll be controlled by an iPhone app. You won’t have to go to a doctor’s office and have them have a programmer configure it,” he said during the presentation.
We also know that it will use Bluetooth to connect to your device. And there will be a battery and radio worn behind the ear, rather than those components being in the implant.
“The interface to the chip is wireless, so you have no wires poking out of your head. That’s very important,” said Musk.
Neuralink security: Bluetooth vulnerabilities
So, what does this mean from a security perspective? First, it is no secret that Bluetooth has its security problems. In opening up a channel for two devices to communicate, Bluetooth also opens the door for potential attacks when security standards are weak. As such, Neuralink will have to ensure its Bluetooth protocols are kept up to date and use the highest levels of encryption.
Musk also mentioned that Neuralink will be available “in the App Store”, which could run the risk of copycat apps that compromise the brain implant.
Alastair Beresford, reader in computer security at the University of Cambridge, said that more information on how such a device might work would be needed to fully know the risks, but “that it is impossible to write bug-free code, and therefore good security for internet-connected devices requires prompt patching when vulnerabilities are discovered”.
He added that “if the idea is more than simply reading from the brain, but also writing new data into the brain, then this is likely to be a big risk”.
Security in medical implants has been overlooked in the past. Kaspersky Lab has previously warned that malicious hackers could compromise implants simply by exploiting the weak default passwords that are common on internet-connected medical devices. Enforcing strong passwords is trickier still, given that physicians may need quick access in an emergency.
Neuralink security failures could be “disastrous”
And even medical devices that have made it to market have been found to have vulnerabilities. Security researchers at cybersecurity firms Whitescope and QED Secure Solutions have previously shown how they could remotely disable an implantable insulin pump, as well as take control of the electrical impulses sent to the heart to regulate a patient’s heartbeat.
Whitescope’s Billy Rios, one of the researchers who demonstrated these medical device vulnerabilities, told Verdict that he’d need to examine Neuralink’s system in order to comment on the specifics, but warned connectivity would likely be the biggest Neuralink security risk:
“From a technical standpoint, I’d see the remote connectivity as the biggest risk,” he said. “How is authentication handled, how are software updates handled, how is the communications between the device and the phone secured… these are the types of questions I’d be asking.
“If someone were to be able to take over the device, the consequences could be disastrous.”
One possibility is that attackers could inject malware into the device’s code, causing someone to lose their memory, inflicting pain or causing paralysis. Or, perhaps our memories, the most personal data of all, could be at risk of theft a la Inception.
Neuralink’s implant, like any technology, is amoral. It is humans that will attempt to subvert it for nefarious purposes, and humans that will have to ensure its defences are up to scratch.
But it should still give us pause, especially when AI could be altering how our brains function.
Security from “the ground up”
Oliver Tavakoli, chief technology officer at cybersecurity firm Vectra, compares Neuralink’s potential to change our bodies to gene-editing tool CRISPR, in that both “can be used for good and for evil”.
“Our experience with CRISPR has shown us that we’re not prepared to have wide-ranging and serious discussions of this sort yet. But even beyond the ethics of legitimate use, the fact that we potentially enable third parties to use this technology on individuals without their permission should give us pause.
“Over the past few decades, we have created computer architectures, operating systems and applications without considering security first. We can see where that has led us in our daily news cycle. Interfaces like Neuralink should be designed from the ground up with fail-safes and peer reviews of the technology to ensure a high level of safety.”
Notably, Neuralink is yet to be peer-reviewed, which is seen as a crucial step in a technology progressing to the next phases of trials.
Verdict asked Neuralink about the steps it would take to ensure Neuralink security is of the highest standard, but had not received a reply at the time of publication.
The human brain is the most complex machine in the known universe. Pulling off Neuralink’s ambitious goals will be an impressive scientific feat, but one that cannot come at the expense of a security-first approach, warned Rios.
“Obviously, anything that touches your brain has to be SOLID from a cybersecurity standpoint,” said Rios.
“I hope the Neuralink team has built their system from the ground up with security in mind, otherwise, they will be putting their patients at serious risk.”