A healthcare algorithm used to decide whether millions of US patients received access to a high-risk care management programme demonstrated “significant” racial bias against black patients.

The software program, which is reportedly widely used in the healthcare industry, was found to routinely choose healthier white patients over black patients with poorer health for access to extra care.

The algorithmic bias was discovered by researchers from the University of California, Berkeley, the University of Chicago Booth School of Business and Partners HealthCare in Boston.

Bias in the system stemmed from the variable used to predict patient risk: patients who had generated higher healthcare bills were deemed to be at greater risk.

“The algorithms encode racial bias by using healthcare costs to determine patient ‘risk,’ or who was most likely to benefit from care management programs,” said Ziad Obermeyer, acting associate professor of health policy and management at UC Berkeley and lead author of the paper.

“Because of the structural inequalities in our healthcare system, blacks at a given level of health end up generating lower costs than whites. As a result, black patients were much sicker at a given level of the algorithm’s predicted risk.”
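
The mechanism can be illustrated in a few lines of code. The sketch below is purely hypothetical – synthetic data and scikit-learn, not the commercial system studied – but it shows how ranking patients by predicted cost systematically under-scores a group that generates lower costs at the same level of illness.

```python
# Hypothetical sketch only -- synthetic data, not the commercial algorithm in
# the study. It shows how ranking patients by *predicted cost* penalises a
# group that generates lower costs at the same level of illness.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
illness = rng.gamma(2.0, 1.0, size=n)     # true health need (unobserved by the model)

# Assumption for this demo: group B generates ~30% lower costs than group A
# at the same level of illness (a stand-in for unequal access to care).
cost_factor = np.where(group == 1, 0.7, 1.0)
prior_cost = illness * cost_factor + rng.normal(0, 0.1, size=n)
future_cost = illness * cost_factor + rng.normal(0, 0.1, size=n)

# "Risk score" = predicted future cost, learned from prior cost.
model = LinearRegression().fit(prior_cost.reshape(-1, 1), future_cost)
risk_score = model.predict(prior_cost.reshape(-1, 1))

# Enrol the top 3% by risk score, as a care-management cut-off might.
selected = risk_score >= np.quantile(risk_score, 0.97)
for g, name in [(0, "group A"), (1, "group B")]:
    sel = selected & (group == g)
    print(f"{name}: {sel.sum()} enrolled, "
          f"mean illness of enrolled = {illness[sel].mean():.2f}")
# Group B patients must be considerably sicker to clear the same cost-based
# threshold, mirroring the pattern described above.
```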

The researchers established the bias by comparing the algorithm-predicted risk scores of 43,539 white patients and 6,079 black patients enrolled at an unspecified hospital against other health metrics and biomarkers. The findings were published in the scientific journal Science.

Biased healthcare algorithm: “Algorithms by themselves are neither good nor bad”

The researchers removed bias from the healthcare algorithm by tweaking it to use other variables in its prediction. They estimate that fixing the algorithm could more than double the number of black patients automatically admitted to the care management programme.
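
The exact variables used in the corrected algorithm have not been published, but the general idea of changing the prediction target can be sketched. The second hypothetical example below trains the same kind of model on a health-based label – a noisy count of active chronic conditions – rather than cost; the feature and label names are assumptions for illustration.

```python
# Second hypothetical sketch: the same selection rule, but the model is
# trained on health-based features and a health-based label instead of cost.
# The actual variables in the fixed commercial algorithm are not public.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, size=n)
illness = rng.gamma(2.0, 1.0, size=n)

# Neither the feature nor the label depends on how much each group spends.
prior_diagnoses = illness + rng.normal(0, 0.2, size=n)
chronic_conditions = illness + rng.normal(0, 0.1, size=n)

model = LinearRegression().fit(prior_diagnoses.reshape(-1, 1), chronic_conditions)
risk_score = model.predict(prior_diagnoses.reshape(-1, 1))

selected = risk_score >= np.quantile(risk_score, 0.97)
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: {(selected & (group == g)).sum()} enrolled")
# Because the score now tracks need rather than spending, equally sick
# patients are enrolled at roughly the same rate in both groups.
```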

“Algorithms by themselves are neither good nor bad,” said Sendhil Mullainathan, the Roman Family University professor of computation and behavioural science at Chicago Booth and senior author of the study.

“It is merely a question of taking care in how they are built. In this case, the problem is eminently fixable – and at least one manufacturer appears to be working on a fix. We would encourage others to do so.”

The biased healthcare algorithm is the latest example of an AI system reflecting human bias. Facial recognition systems, including ones used for passport renewals, have consistently struggled to recognise black people.

Facial recognition systems require large training datasets to work accurately, but there are often disproportionately fewer images of ethnic minorities in these datasets.

Algorithmic bias can be reduced by auditing algorithms while they are being developed. This could be undertaken by an ethicist who looks out for unintended consequences that could arise from an AI system before it is deployed.
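
One way such an audit could work in practice, loosely mirroring the researchers' own check, is to compare an independent measure of health across groups within each band of predicted risk. The function, column names and synthetic data below are hypothetical.

```python
# Hypothetical audit sketch: given a model's risk scores, a protected
# attribute and an independent health measure, check whether equally scored
# patients are equally sick across groups.
import numpy as np
import pandas as pd

def audit_calibration_by_group(df, score_col, group_col, health_col, n_bins=10):
    """Compare mean health need across groups within each risk-score decile."""
    df = df.copy()
    df["score_bin"] = pd.qcut(df[score_col], q=n_bins, labels=False, duplicates="drop")
    return df.groupby(["score_bin", group_col])[health_col].mean().unstack(group_col)

# Hypothetical usage with synthetic data:
rng = np.random.default_rng(2)
n = 5_000
demo = pd.DataFrame({
    "group": rng.integers(0, 2, size=n),
    "chronic_conditions": rng.gamma(2.0, 1.0, size=n),
})
# A biased score: deflated for group 1 at the same level of need.
demo["risk_score"] = demo["chronic_conditions"] * np.where(demo["group"] == 1, 0.7, 1.0)

print(audit_calibration_by_group(demo, "risk_score", "group", "chronic_conditions"))
# Large gaps between groups within the same score decile are a red flag that
# the score is not measuring need equally for everyone.
```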

Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, told Verdict that private companies should take more responsibility to ensure their AI systems are not biased.

“This is the same repeated pattern of racial bias that we are finding with algorithmic deciders all over the planet. It is becoming so common that it is clearly time to stop it with hard regulation,” he said.

“Private companies are making and selling these deciders to public service providers and others without adequate testing. The service providers then use them on trust. But there is no room for trust here.

“A couple of academics found the flaws in the care prediction algorithm and found a way to fix it. Why could the company not have done that in the first place? They should be accountable and liable for the suffering that they have already caused.”


Read more: Businesses must monitor AI bias more closely: CBI