Healthcare disparity represents a significant challenge within health systems worldwide.
It encompasses the differences in access to, quality of, and outcomes from healthcare services experienced by various populations, often influenced by factors such as socioeconomic status, race, ethnicity, geographic location, and education.
These disparities can lead to preventable diseases, higher morbidity and mortality rates, and overall poorer health outcomes for marginalised groups. Addressing healthcare inequality is crucial not only for promoting social justice but also for enhancing the overall efficiency and effectiveness of healthcare systems, ensuring that all individuals receive optimal healthcare regardless of their background or circumstances.
How AI can advance healthcare
The exponential rise of AI has brought the promise of greatly improved healthcare. AI has the potential to increase healthcare accessibility by:
- Using telemedicine and virtual health assistants to enable remote consultations and monitoring for underserved populations in rural or low-income areas with limited healthcare facilities.
- Enhancing personalised medicine thanks to its ability to analyse vast amounts of data. This will allow treatments to be tailored for individual patients based on their unique genetic, environmental, and lifestyle factors, thus improving health outcomes, particularly for marginalised groups.
- Enhancing diagnostic accuracy and assisting healthcare providers in diagnosing conditions quickly and accurately. This is especially beneficial in communities lacking specialists.
- Optimising resource allocation by predicting healthcare needs and identifying at-risk populations. This will ensure that resources are directed where they are most needed. It can also streamline processes to improve efficiency, making healthcare more affordable for low-income individuals.
- Analysing the social determinants of health to identify patterns contributing to disparities, enabling targeted interventions.
AI’s inherent bias
However, challenges such as data bias and the digital divide mean that AI could instead worsen existing inequalities.
A recent paper published in Nature Medicine examined nine large language models and uncovered a variety of sociodemographic biases. The study found that these models were six to seven times more likely to recommend mental health assessments for cases identified as belonging to LGBTQIA+ subgroups.

Furthermore, cases labelled as having high-income status were significantly more likely to receive recommendations for advanced imaging tests than those from low- and middle-income backgrounds, which often received suggestions for only basic or no further testing. This disparity lacked clinical reasoning and appeared to mirror real-life healthcare discrepancies.
A significant source of bias in AI stems from the data utilised for training. This often results from insufficient representation of certain groups—a well-documented issue in healthcare. For example, women and minority ethnic groups are often underrepresented in clinical trials that evaluate drug safety and efficacy—trials that ultimately inform first-line therapy choices and guide clinical decision-making. Additionally, the prevailing digital divide exacerbates this bias.
The International Telecommunication Union (ITU) reports that only 27% of the population in low-income countries is estimated to have internet access, compared to 93% in high-income countries. Consequently, not only is the deployment of AI more challenging, but the inclusion of patient data from these regions in AI training is also problematic, particularly when much of this data may not be digitised.
What can be done to support AI integration?
Efforts have been made to expand and support the use of AI in developing and low-income countries. OpenAI has provided $150,000 in technical grants to non-profits in India. This has helped the Myna Mahila Foundation to produce Myna Bolo, a chatbot offering 24/7 reproductive health guidance to women in underserved communities.
In Malawi, 19 out of every 1,000 babies die during labour or in their first month of life. PeriGen, in collaboration with Malawi’s health ministry and the Texas Children’s Hospital, donated software that monitors vital signs during labour to provide early warning of complications. The use of this software has led to an 82% reduction in the number of stillbirths and neonatal deaths over the last three years.
To harness AI effectively in addressing healthcare disparities, it is essential to limit the biases inherent in AI systems by ensuring diverse and representative data during training. Understanding the limitations and biases that persist in AI applications is crucial for developing equitable solutions that can cautiously inform clinical decisions.
Deploying AI technologies in countries and communities that need them most is vital to bridging the healthcare gap. By focusing on these areas, the potential of AI can be used to improve health outcomes for marginalised populations, fostering a more effective healthcare system worldwide.