Whether dealing with a public or private provider, today’s health care consumers may not be surprised to find themselves conversing with tech: with chatbots and smart assistants. While some surgical robots can be found in hospitals, it’s much more common in a healthcare context to encounter digital robots focused on communication. The tech that drives these healthcare bots is evolving to meet the challenging demands of modern healthcare customers – and the thorny issues of privacy.
Perhaps most surprising to the newcomer is the fact that automated or robotic technology and healthcare have a rather long history together. The early 20th century, for example, produced Mademoiselle Claire, a machine in Paris which handed out surgical instruments to doctors. The post-war era gave us “living” medical dummies such as Sim One, designed for use in anaesthetist training, and Lumena, a robotic anatomical model created to educate about the female body. Speaking with a woman’s voice to students, Lumena was probably the first talking robot in the medical space.
Like their ancestors, today’s healthcare simulacra don’t pose as doctors and surgeons but as assistants and emergency operators. Being mainly digital, they are far removed from the clunky humanoids of yesteryear and more commonly powered by artificial intelligence (AI) than cogs and gears. They’re also likelier to be found in pop-up chat boxes or apps than the space of the emergency room.
Smart assistants go viral in health tech
The Covid pandemic has boosted demand for health tech solutions around the world. Over three-quarters of US specialists increased use of telemedicine during the pandemic, according to GlobalData’s thematic research. Besides being able to Zoom with human doctors, patients with chronic conditions could speak to Siri-esque caregivers from Careangel on the phone when face-to-face contact proved difficult in lockdown.
Meanwhile, AI-powered chatbots like Babylon Health’s Symptom Checker and France’s Covidbot were developed for contactless screening of coronavirus symptoms and to answer questions from the public about the illness. They not only reduced the workload of hospital staff and improved patient flow, but also cut unnecessary hospital visits and the risk of coronavirus infection.
In another example of patient-facing assistance, Nuance Communications, which Microsoft acquired in a $19.7bn deal in April, also provides a Covid-19 Vaccine Bot and Vaccine Assistant which answer questions and look up availability and eligibility criteria on coronavirus vaccines. The US-based speech-recognition company is best known for having provided the speech recognition engine that powers Siri, the smart assistant that talks to Apple customers around the world. Nuance, however, mostly makes its money not from bots helping consumers, but through enterprise AI tools that transcribe doctors’ notes and visits, along with customer service calls and voicemails.
Making way for ‘Monstersoft’
“Physicians, nurses and everyone involved not having to take manual notes allows them to treat more patients and spend more time on care,” says GlobalData analyst David Brown.
Brown, who calls Microsoft “the big monster that just moved into the clinical side of the space”, believes the Nuance acquisition will “greatly bolster Big Tech’s position in provider-based conversational AI with cloud capabilities.”
In a recent report on AI in healthcare, GlobalData forecasts that the market for AI platforms across the entire health industry will reach $4.3bn in three years’ time. That’s 8.2% of the $52bn total forecast for all AI platforms by 2024.
While most of this growth is driven by solutions for healthcare professionals, the patient-facing market automates the more preliminary work otherwise carried out by healthcare staff, and is thus seen as potentially lucrative and ripe for expansion.
Patient-facing smart assistants are more often than not found in chatbot and app form, usually on personal devices as opposed to digital touchpoints in clinics and hospitals. Smartphones offer AI-powered text chatbots to check your symptoms (e.g. Ada) or help with mental health (Wysa, Woebot). On the voice-recognition side, it’s long been possible to access the Mayo Clinic First Aid digital assistant via Amazon’s various Alexa devices.
Woebot at work (Credit: Bloomberg via Getty Images)
The tech behind these artificial health providers is the same as that of any virtual assistant, reliant upon input and intelligent algorithms. Like all other chatbots, it’s a case of call and response, with the human party giving information to the inhuman which in turn will hopefully respond with the right info. What makes conversational AI in healthcare unique, though, is the challenge on the patient side, mainly arising out of privacy concerns and the technology’s success (or otherwise) in understanding the human.
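The call-and-response pattern described above can be sketched in a few lines. This is a toy illustration, not any vendor’s actual model: the intents, keywords and replies are hypothetical, and real symptom checkers use trained NLP models rather than keyword matching.

```python
# Minimal sketch of the call-and-response loop behind a patient-facing
# chatbot: classify the user's utterance into an intent, then reply.

def classify_intent(utterance: str) -> str:
    """Map free text to a known intent. Real systems use trained
    NLP models; keyword matching stands in for them here."""
    text = utterance.lower()
    if any(word in text for word in ("fever", "cough", "headache")):
        return "report_symptom"
    if "appointment" in text:
        return "book_appointment"
    return "unknown"

# Each intent maps to a canned response (hypothetical examples).
RESPONSES = {
    "report_symptom": "How long have you had these symptoms?",
    "book_appointment": "Which day works best for you?",
    "unknown": "Sorry, I didn't catch that. Could you rephrase?",
}

def respond(utterance: str) -> str:
    """The 'call' is the user's input; the 'response' is the
    bot's best-matching reply."""
    return RESPONSES[classify_intent(utterance)]
```

The hard part in healthcare is the classification step: getting the right intent from a worried, vague or atypical speaker is where these systems succeed or fail.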
Smart and sci-fi
When thinking of robot assistants in the future, people tend not to picture today’s smart speakers but sleek white mechanical shells. Examples include the killing machines of I, Robot and the far cuter EVE from the Pixar animation Wall-E.
Real-life robots like this do already exist, but a less high-profile form factor is the one modelled after the hopping lamp seen in the logo animation before every Pixar film. And this one isn’t made to assist the future descendants of Generation Z, but today’s growing generation of elderly patients.
ElliQ from Israel’s Intuition Robotics is a moving, speaking lamp that resembles Pixar’s Luxo Jr. The device, as Intuition Robotics’ CEO and co-founder Dor Skuler tells Verdict, was initially created with the goal of minimising the impacts of loneliness and social isolation amongst older adults, while empowering their independence at home.
“In terms of ElliQ’s physical design and shape, we actually took a lot of inspiration from the Disney Pixar lamp, an object that embodies and evokes our emotions through movement alone,” says Skuler.
This empathetic look and feel helps “sell” ElliQ to elderly users as less of an alien and more of a carer by their side that reminds them to take their medicine. Ironically, an app may have been an even harder sell than a futuristic-looking robot, according to Skuler.
“Apps are great for digital natives or those that already use them frequently, but many of our users are in their 80s and 90s, and we wanted to accommodate their tech abilities,” he explains. “We wanted ElliQ to be intuitive and easy for even the least tech-savvy older adults to operate while giving them a sense of company at home. In order to do that, it was imperative that it take on a physical form, proactively engaging with users via multiple modalities (speech, movement, sound, LEDs, on-screen UI) for a richer, more seamless experience.”
That experience may not be seamless if the bot is unable to understand older adults’ sometimes slower, more confused ways of talking. Intuition Robotics deployed Natural Language Processing (NLP) and Speech to Meaning (STM) technologies to help the robot decipher the meaning and context behind users’ speech.
“ElliQ does have a wake word in her name that users can say whenever they’d like to begin interacting with it – i.e., ‘ElliQ, let’s do sleep relaxation’ – though it proactively interacts with them as well, based on our algorithms understanding situational context.”
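The wake-word mechanism Skuler describes can be sketched simply: the assistant only acts on speech that begins with its name, as in “ElliQ, let’s do sleep relaxation”. The code below is a minimal illustration of that gating step, not Intuition Robotics’ implementation; in practice wake-word detection runs on the raw audio, before any transcript exists.

```python
# Sketch of wake-word gating: commands are only extracted from
# transcripts that open with the assistant's name.
from typing import Optional

WAKE_WORD = "elliq"

def extract_command(transcript: str) -> Optional[str]:
    """Return the command following the wake word, or None if the
    utterance was not addressed to the assistant."""
    words = transcript.lower().strip().split()
    if words and words[0].rstrip(",") == WAKE_WORD:
        return " ".join(words[1:])
    return None
```

A proactive assistant like ElliQ differs from this pattern in one key way: it can also initiate interactions itself, rather than waiting to be addressed.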
In addition to responding via voice, Skuler says, users can reply to ElliQ by selecting from an on-screen menu on her tablet, which is very handy when speech is out of the question due to age or health.
The ElliQ bot is an interesting indication of where smart assistants in healthcare are heading. While chatbots mainly cater to a large audience by answering universal questions, making them digital first responders of sorts, more and more apps are being designed for different kinds of consumers who may not benefit from a one-size-fits-all approach.
Marketing to the elderly makes sense. GlobalData analyst Nicklas Nilsson believes “voice assistants promise a lot of support for the ageing population which continues to increase”. This rising elderly population poses a significant challenge for most countries due to a lack of sufficient caregivers.
“According to the WHO, the share of elderly (aged 60 and over) will increase from 12% in 2015 to 22% in 2050,” Nilsson adds. “Adoption of voice user interfaces among the elderly is already high (for those) with movement and dexterity problems, loneliness and memory loss. That demand will most certainly increase significantly going forward.”
Do we really want Big Tech by the sick bed?
There is also healthy demand for digital assistants in the privacy of one’s home as homes become smarter. As an example we have Voiceitt, an AI-powered speech recognition app for individuals with speech impairments. As CEO and co-founder Danny Weissberg tells Verdict, his company’s app “translates atypical speech to allow users to communicate in their own voice with loved ones, caretakers and others” (his emphasis).
Voiceitt has recently expanded from phone into the smart home arena by integrating with the Alexa Voice Service API. Backed by the app, Alexa can comprehend and complete a command given in the user’s own voice.
“A person with atypical speech can utilise Alexa to turn on and off lights, play music, turn on the television, etc,” says Weissberg.
The aim is to give Voiceitt users “newfound independence, helping improve their quality of life.”
This meant adapting the programming at the app’s core.
“We included phrases in the suggested training that our users find helpful with smart homes, including ‘what is the weather like today,’ or ‘turn on the TV,’” Weissberg reveals.
Voiceitt is also conducting pilot programs at care centres and hospitals around the world. This can be a thorny area, as medical details are very sensitive data.
“Every time you go to a doctor’s room and shut that door, you presume whatever you say to them is confidential,” GlobalData analyst Ed Thomas reminds Verdict. “If you leave your doctor’s surgery and then start getting ads on Google for pills related to the problem you’ve just seen your doctor about, you would have a right to be very upset.”
“What if they use that information to push certain insurance policies? That’s a huge invasion of someone’s privacy, but it’s potentially possible.”
The Voiceitt app itself is GDPR (General Data Protection Regulation), HIPAA (Health Insurance Portability and Accountability Act) and CCPA (California Consumer Privacy Act) compliant. It doesn’t require any private information from users beyond names and email addresses.
“Voiceitt’s machine learning algorithm, which helps our advanced speech recognition technology continuously improve its capabilities, does collect data,” Weissberg stresses, “but it is completely anonymised, mitigating any privacy risk.”
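One common way to anonymise training data of this kind is to strip direct identifiers and replace the user ID with a salted one-way hash before a sample enters the training set. The sketch below illustrates that general approach; it is not Voiceitt’s actual pipeline, and the field names are hypothetical.

```python
# Sketch of pseudonymisation for collected training samples: the raw
# user ID never enters the dataset, only an irreversible hash of it.
import hashlib

SALT = "example-salt"  # in practice a secret, securely stored value

def anonymise_sample(user_id: str, audio_features: list) -> dict:
    """Replace the identifying user ID with a salted SHA-256
    pseudonym before the sample is stored for training."""
    pseudonym = hashlib.sha256((SALT + user_id).encode()).hexdigest()[:16]
    return {"speaker": pseudonym, "features": audio_features}
```

The salt matters: without it, a known user ID could be hashed again and matched against the dataset, undoing the anonymisation.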
Going back to ElliQ, it’s worth noting how the bot stands out from other voice assistants in the health market by being device-driven rather than app-based, with its key tech running on proprietary hardware. This isn’t a solution designed for Alexa or Apple HomePod, which also lets ElliQ neatly sidestep concerns over data privacy.
Smart assistants in health tech – and on the edge
ElliQ’s unique “shelling of the ghost” might be one future for smart assistants in healthcare, especially for hospitals serious about using smart speakers in their rooms. Simple solutions are also found on the “edge”, where computation and data storage are kept close to the location where they are needed as opposed to deep in the cloud.
“Processing the information at the edge is frequently mentioned as a way to preserve data privacy,” Nilsson explains. “Federated learning could be another solution to help in these instances: In short, a device downloads a model from the cloud, improves it by learning from data stored in its memory, and then updates the model based on the localised data.
“Only the updated model is sent back to the cloud, where it joins up with contributions from other devices. The benefit of this technique is that the user’s experience improves without them having to share personal data.”
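The loop Nilsson describes can be sketched with a toy one-parameter model: each device fits the shared model to its own data, and only the resulting weights travel to the cloud, where they are averaged. This is a deliberately minimal illustration of federated averaging, assuming a single numeric weight rather than a real neural network.

```python
# Toy sketch of federated learning: raw data stays on the device,
# only model weights are shared and averaged in the cloud.

def local_update(global_weight: float, local_data: list,
                 lr: float = 0.1, steps: int = 10) -> float:
    """On-device step: pull the downloaded weight towards the local
    data by gradient descent. The data itself never leaves."""
    w = global_weight
    for _ in range(steps):
        grad = sum(w - x for x in local_data) / len(local_data)
        w -= lr * grad
    return w

def federated_average(updates: list) -> float:
    """Cloud step: combine the devices' updated weights. The server
    only ever sees weights, never the data behind them."""
    return sum(updates) / len(updates)
```

The privacy benefit is exactly as described in the quote: the user’s experience improves because the averaged model reflects everyone’s data, yet no personal data is ever uploaded.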
Another view from GlobalData comes via Brown, who brings up the Obama-era 21st Century Cures Act. Designed to put patients in control of their own health information, its promotion of interoperability takes power out of the hands of a single provider and lets consumers choose their own tech to assemble and read their records.
“With the Cures Act, patients are being put in control of their own data,” says Brown. “It could certainly be powerful to allow patients to carry ‘medical assistants’ in their pockets that allow them to query their health data, ask for specific options on treatment, and ask to be interfaced with appropriate providers. Also, when visiting providers, to allow these conversational AI bots to prompt patients to ask effective questions, store actionable items, translate and more.
“This could be a very powerful thing for a patient to carry around with them.”
Find the GlobalData Artificial Intelligence in Healthcare – Thematic Research report here.