Artificial intelligence (AI) technologies being deployed to help inform the general public about the coronavirus, provide diagnostic support and track the spread of the virus are inconsistent in quality, and in the worst cases are putting lives at risk, according to analysis by the World Economic Forum.

According to a report by Eddan Katz, AI project lead, and Conor Sanchez, project specialist at the World Economic Forum, governments and healthcare providers have rushed to adopt AI technologies to support efforts against the coronavirus, but many of the technologies deployed are not up to the task and risk causing severe harm.

Concerns raised over government use of AI to tackle coronavirus

Katz and Sanchez highlighted the growing use of chatbots by governments to keep citizens informed without overwhelming resources, but warned that “many of these technologies lack tools and processes to minimise risk”.

“Measures are typically not in place to ensure information accuracy, implement privacy protection or effectively train models agile enough to track the most current knowledge about this rapidly changing disease,” they wrote.

“This can result in ineffective information dissemination. At worst, it leads to a misdiagnosis of symptoms, potentially putting thousands or more lives at risk.”

Katz and Sanchez have urged governments to develop plans to identify problems with such chatbots and determine just how accurate they are – a step that is currently lacking in many parts of the world where the technology has been deployed.

AI-based tools to track the spread of the virus are also being deployed by governments, largely through private sector partnerships, including through the use of contact tracing apps.

However, there are privacy concerns, which Katz and Sanchez argue have not been adequately tackled and which, without action, will cause problems down the line.

“The privacy concerns over how this data may be used in different contexts and by whom suggest that asking the right questions now can help anticipate the inevitable data protection issues later,” they wrote.

“Establishing processes for thorough documentation can help to assess problems when they do occur and ideally help avoid the harms from taking place.”

Concerns raised over use of AI in coronavirus diagnostics

Within the healthcare sector, AI is increasingly being deployed to diagnose the coronavirus by spotting “infections either through people’s voices or with chest x-rays”, and such tools have proved highly effective at reducing the burden on healthcare workers.

However, Katz and Sanchez warn that “there is little guidance on how to evaluate how well the tools work”.

“As a result, facilities risk making matters worse by incorrectly diagnosing patients,” they wrote.

Governments need to tackle this issue, they argued, by providing clear frameworks for how AI is introduced into the diagnostic environment and ensuring such technologies are thoroughly tested before being used on patients.

“One recent report analysing dozens of computer models to diagnose and treat Covid-19 exposed how all models had been trained with unfit and insufficient data,” Katz and Sanchez wrote.

“Such assessments should force officials to ask critical questions about risk-mitigation processes, performance and accuracy, and ensure there is robust external oversight of systems through an independent third party.”

Covid-19 highlights ongoing need for AI regulation

AI technologies can bring powerful benefits to the efforts against the coronavirus, particularly by expanding the resources of healthcare workers and government agencies.

However, according to Katz and Sanchez, there has for some time been a strong need for effective regulation; the current crisis has just brought it into sharper focus.

“The global crises have underscored the need for responsibility in innovation and the ethical use of technology,” they wrote.

“As we embark on this Great Reset, the rules and standards of governance norms are not yet in place. The window of opportunity is brief to establish a set of actionable procurement guidelines that enable good decision-making for the future.”


Read more: Coronavirus: NHS contact-tracing app will fail without trust, academics warn