Despite the UK government’s plans to invest £250m in a National Artificial Intelligence Lab for the National Health Service (NHS), new research has found that public concerns over accuracy, cybersecurity and the limitations of AI-led chatbots could hinder innovation in the healthcare space.
Titled Acceptability of Artificial Intelligence (AI)-led chatbot services in healthcare: A mixed-method study, the study looked at public attitudes towards AI in healthcare, particularly the public’s willingness to engage with chatbot services.
The study was led by researchers from the University of Westminster and also involved University College London and the University of Southampton.
Researchers found evidence of “AI hesitancy”, with a large portion of the public reluctant to use these new technologies, especially when seeking help and advice regarding serious illnesses.
This is based on 216 recorded interviews which explored demographic and attitudinal variables, such as acceptability and perceived utility of AI technology.
The majority of participants did not understand the technology underpinning chatbots, or how such systems were capable of responding to medical enquiries.
While most participants believed chatbots could accurately provide general health advice, the majority felt that the technology had not been developed enough to provide accurate diagnoses.
Many feared that these systems would be unable to correctly identify symptoms, while others believed that miscommunication between the chatbot and its human user could be an issue, resulting from an inability to accurately describe health problems.
While some recognised that chatbots did allow them to comfortably disclose more intimate information related to their health, many felt that chatbots lacked the empathy to appropriately deal with certain issues, particularly those related to mental health.
Likewise, there was also a fear that disclosing such information to an AI system could pose a cybersecurity risk.
The study concludes that patients are concerned that replacing human professionals with chatbots could potentially reduce the quality of healthcare services.
“Our research shows that at present a large proportion of the public is hesitant to use AI-led tools and services for their health, particularly for severe or stigmatised conditions,” lead researcher Tom Nadarzynski of the University of Westminster, said.
“This is related to a lack of understanding of this technology, the concerns about privacy and confidentiality, as well as the perceived absence of empathy that is vital for patient-centred healthcare in the 21st century.”
AI investment is welcomed, but public consultation needed
“We welcome the government’s initiative to set up ‘an Artificial Intelligence Lab’ within the NHS framework in England,” Nadarzynski said. However, he added that the concerns raised by this study first need to be addressed in order for AI technology to make a “real difference” in healthcare.
“We emphasise the importance of involving the public in the design and development of AI in healthcare,” Nadarzynski said.
The study concludes that in order for AI-led health chatbots to improve healthcare, developers must employ “user-centred and theory-based” approaches that address all patient concerns, and that offer an optimised user experience.
The study has been published in the peer-reviewed journal Digital Health.