High-performance computing (HPC) has played an important role in the fight against Covid-19. Powerful computing clusters have run models that help scientists understand the virus and how it spreads, advance therapeutics, and even develop vaccines.
HPC is not the only technology involved in the fight against the pandemic, but it has underpinned much of the research into how the virus spreads and how it can be treated.
Without the computing power HPC provides, it would not have been possible to run the algorithms behind artificial intelligence (AI), or to model how the virus spreads through a community and how it behaves once it finds a human host.
Covid-19 HPC Consortium provided powerful tools
Several initiatives designed to put cutting-edge technology in the hands of researchers popped up around the world. The broadest and most impactful was also the first to emerge: the Covid-19 HPC Consortium. As the pandemic was spreading across North America in March 2020, the US government launched an initiative to give scientists access to powerful computers that they could use to study Covid-19.
The consortium brought together US government agencies, academia, and technology companies, including Amazon, Microsoft, Nvidia, IBM, AMD, and Dell. Shortly after it was set up, the UK government and research bodies in the EU, Switzerland, South Korea, and Japan joined the consortium.
The consortium made some of the most powerful computer clusters in the world available to researchers looking at issues such as how the virus interacts with the human body on an atomic level and how it spreads from one person to another.
Focus delivered results
The focus of a wide range of actors on one goal – fighting the pandemic – delivered results. One international team of scientists from IBM Research and the University of Oxford used advanced machine learning, computer modelling, and experimental measurements to accelerate the discovery of what became the AstraZeneca vaccine.
In collaboration with the Lawrence Livermore National Laboratory in the US, Utah State University used IBM’s Longhorn computer to study how contaminated droplets are transported and settle within indoor environments, including hospitals. The research involved complex multiphase turbulence simulations that would not have been possible without HPC infrastructure. Access to HPC machines enabled breakthroughs that would have taken years with less powerful computers.
Supercomputers and collaboration
One of the computers made available to the Covid-19 HPC Consortium is Frontera, housed at the Texas Advanced Computing Center at the University of Texas at Austin. Frontera is currently the ninth-fastest machine in the world, according to the twice-yearly Top500 list, compiled by the University of Tennessee, Knoxville and the Lawrence Berkeley National Laboratory in the US, as well as the University of Mannheim in Germany. Frontera contributes 84 petaflops and over 58,000 nodes to the consortium’s computing resources. To match what this capacity delivers in a single second, a person performing one calculation per second would need 2,671,362,889 years.
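That figure can be sanity-checked with a few lines of arithmetic. The sketch below assumes 84 petaflops means 84 × 10^15 operations per second and uses a 365.25-day year; the article's exact number implies slightly different rounding, but the order of magnitude holds.

```python
# Sanity check: Frontera's quoted 84 petaflops versus a person
# performing one calculation per second.
PETA = 10 ** 15
ops_per_second = 84 * PETA             # operations Frontera completes each second
seconds_per_year = 365.25 * 24 * 3600  # seconds in a Julian year

# At one calculation per second, matching one second of Frontera's
# output would take this many years:
years = ops_per_second / seconds_per_year
print(f"{years:,.0f} years")  # on the order of 2.7 billion years
```

The result lands at roughly 2.66 billion years, in line with the figure quoted above.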
The IBM-built Summit, housed at Oak Ridge National Laboratory in the US, is even more powerful. It currently ranks as the second-fastest computer globally, having lost the top spot to Fujitsu’s Fugaku in 2020. Summit’s role was to process data relating to over 40,000 genes, 17,000 genetic samples, and 2.5 billion genetic combinations. It helped scientists understand how the virus behaves in the human body, which is essential for developing both vaccines and therapeutics.
Two lessons should be learned from the use of HPC in the fight against Covid-19. The first is that multinational organizations, national governments, private companies, and academia can achieve a lot with technology in a short time when they collaborate. The second is that without both technology and cross-border collaboration, the loss and suffering caused by the pandemic would have been even greater.