Managing a data centre’s hardware is complex: it requires continuous monitoring and management by information technology (IT) staff, which raises operating expenditure. Upgrades and new business requirements demand further investment and, at times, substantial manual intervention.
Listed below are the key technology trends in hardware impacting the data centre theme, as identified by GlobalData.
Mergers and Acquisitions (M&As)
Technological change in the semiconductor sector will drive similar transformation in the data centre industry, with the impetus coming from several acquisitions. The largest in the industry is the $40bn acquisition of Arm by Nvidia, which is subject to regulatory approval that may take until 2022. Arm chips are increasingly used in data centres, where their low power draw can be an advantage.
Another key deal is Advanced Micro Devices’ (AMD) $35bn purchase of Xilinx, which would expand AMD’s rapidly growing data centre business. A third significant deal was announced in January 2021, with Qualcomm buying custom processor start-up Nuvia for $1.4bn. Qualcomm could use Nuvia’s expertise in server processors to expand into communications infrastructure and data centre applications in the future.
RISC-V
A new open chip standard, RISC-V, has begun to produce breakthroughs in chip design. One RISC-V microprocessor design runs at a clock speed of 5 gigahertz (GHz), considerably above the 3.2GHz of an Intel Xeon E7 server chip, while drawing just one watt of power at 1.1 volts, a fraction of the Xeon’s consumption. RISC-V is also being enhanced by new instruction sets designed to boost its 3D graphics and artificial intelligence (AI) performance.
The RISC-V prototype is gaining interest because it eliminates a bottleneck that can arise between fast memory and slower chips. At the heart of the breakthrough is that the RISC-V architecture is open, unlike the proprietary x86 architecture of Intel’s chips. That openness allows new chip design choices to resolve bottlenecks, which is not possible when a chip’s instruction set is locked down. The RISC-V base instruction set is also much simpler, with fewer than a hundred instructions, which ultimately makes chip production simpler and cheaper.
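That simplicity shows up in the instruction encoding itself: every base RV32I instruction is a fixed 32-bit word with the opcode always in the lowest seven bits, which keeps decoders small. A minimal illustrative sketch in Python (the field positions follow the published RISC-V specification; the example word encodes `addi x1, x0, 5`):

```python
# Illustrative sketch only: decoding one fixed-width RV32I instruction word.
# The opcode always occupies the lowest 7 bits, so a decoder stays simple.

def decode(word: int) -> dict:
    """Split a 32-bit RISC-V I-type instruction into its fields."""
    return {
        "opcode": word & 0x7F,           # bits 6..0
        "rd":     (word >> 7) & 0x1F,    # destination register, bits 11..7
        "funct3": (word >> 12) & 0x7,    # operation selector, bits 14..12
        "rs1":    (word >> 15) & 0x1F,   # source register, bits 19..15
        "imm":    (word >> 20) & 0xFFF,  # immediate, bits 31..20 (sign bit not extended here)
    }

# 0x00500093 encodes "addi x1, x0, 5": load the constant 5 into register x1.
fields = decode(0x00500093)
print(fields)  # opcode 0x13 (OP-IMM), rd=1, rs1=0, imm=5
```

Because the fields never move between instruction formats of the same type, hardware decoders need far less logic than for a variable-length instruction set such as x86.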
Edge computing
The rise of edge computing will reshape the data centre landscape. Today’s IP networks cannot handle the high-speed data transmissions that tomorrow’s connected devices will require. In a traditional IP architecture, data often travels hundreds of miles over the network between end-users or devices and cloud resources, which introduces latency.
Establishing IT deployments for cloud-based services in edge data centres in local areas brings IT resources closer to end-users and devices. Technologies to benefit from edge data centres include 5G mobile networks, internet of things (IoT) and Industrial Internet devices, autonomous vehicles, virtual and augmented reality, and AI.
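Distance alone puts a floor under that latency, before any routing or processing delay. A rough, back-of-envelope Python sketch (the fibre speed factor and distances are illustrative assumptions, not measured deployments) shows why moving resources closer to users helps:

```python
# Back-of-envelope sketch: the latency floor imposed by distance alone.
# Assumes signals in optical fibre travel at roughly two-thirds the speed
# of light in vacuum; distances below are illustrative only.

SPEED_OF_LIGHT_KM_PER_S = 300_000
FIBRE_SPEED_FACTOR = 0.66  # glass slows light relative to vacuum

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds over fibre of this one-way length."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_PER_S * FIBRE_SPEED_FACTOR)
    return one_way_s * 2 * 1000

print(f"{round_trip_ms(800):.1f} ms")  # distant cloud region: ~8 ms from physics alone
print(f"{round_trip_ms(20):.1f} ms")   # metro edge site: ~0.2 ms
```

For latency-sensitive workloads such as autonomous vehicles or augmented reality, shaving milliseconds at the physical layer is exactly what edge data centres provide.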
Hyperscale data centres
Hyperscale data centres evolved from the needs of Big Tech companies such as Amazon, Microsoft, Google, and Facebook for data centres able to support high levels of throughput, performance, and redundancy while also providing very high availability. A hyperscale data centre will typically house thousands of servers and cover tens of thousands of square feet.
Hyperscale data centres will host 53% of all installed data centre servers by 2021, according to Cisco. They already account for 39% of total Internet Protocol (IP) traffic, a share Cisco expects to jump to 55% in 2021. As the industry grows, there will be a greater need for hyperscale data centres in strategic locations worldwide, including more remote regions, particularly in the US.
Data centre automation
The pandemic has accelerated the need to make systems less reliant on human intervention. That has, in turn, spurred a rapid embrace of data centre automation, with automation and robotics playing a much bigger role in facilities management. There will be more widespread use of robots to install servers in racks, swap out failed servers, manage disk storage and interconnection, and monitor site security. Facebook has already created a Site Engineering Robotics team to design and develop robotics solutions to automate and scale Facebook’s data centre infrastructure operations.
AI processing
Massive leaps in AI processing technology could mean that conventional computing can run some of the processes previously thought to be only possible with quantum computing, such as mapping how proteins form 3D structures.
In December 2020, Google DeepMind’s AlphaFold 2 algorithm proved over 90% accurate in predicting protein structures at speed, a feat previously thought to be years away and to require quantum computing. IBM and Intel, both also major players in quantum computing, along with start-ups such as Graphcore and Cerebras, have developed advanced chips designed specifically for AI processing. These chips promise an exponential increase in information density and processing power while drawing only a few watts.
Energy storage
The future will see more data centres using utility-scale energy storage to power the cloud with renewable energy. In 2020, both Switch and Google announced projects to begin supporting their data centres with large lithium-ion batteries. Energy storage enables large energy users such as data centres to overcome the unpredictable generation patterns of renewable energy sources, such as solar or wind.
Solar panels can only generate power in sunny weather, and wind turbines are idle in calm weather. Switch uses Tesla Megapacks to create energy storage capacity to support the use of solar power for its large data centre campuses in Las Vegas and Reno.
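To give a sense of the scale involved, the sketch below estimates how many grid-scale battery units would be needed to carry a site through a generation gap. The per-unit capacity and the example site load are assumptions for illustration only; real Megapack configurations and data centre loads vary widely:

```python
import math

# Illustrative sizing sketch: battery units needed to bridge a renewable
# generation gap. MEGAPACK_USABLE_MWH and the 50MW example load are
# assumptions for illustration, not vendor or site specifications.

MEGAPACK_USABLE_MWH = 3.9  # assumed usable energy per unit

def units_needed(site_load_mw: float, gap_hours: float) -> int:
    """Battery units required to carry the load through the gap, rounded up."""
    energy_needed_mwh = site_load_mw * gap_hours
    return math.ceil(energy_needed_mwh / MEGAPACK_USABLE_MWH)

# e.g. a hypothetical 50MW campus riding through a four-hour evening lull:
print(units_needed(50, 4))  # 200MWh of demand -> 52 units at 3.9MWh each
```

Even under these simple assumptions, bridging a few hours of lost solar generation for a large campus requires dozens of units, which is why such deployments are announced as utility-scale projects.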
Silicon photonics
Intel recently highlighted advances in integrating photonics with low-cost, high-volume silicon. Intel argues that this addresses challenges around the performance scaling of electrical input/output (I/O). New data-centric workloads are growing within data centres, and the ever-increasing data movement from server to server is taxing today’s network infrastructure.
Intel believes integrating silicon photonics and complementary metal oxide semiconductor (CMOS) silicon through advanced packaging techniques provides three benefits: lower power, higher bandwidth, and reduced pin count. Intel foresees more disaggregated future architectures, with multiple functional blocks of compute, memory, accelerators, and peripherals spread through the network and interconnected by high-speed, low-latency optical links.
Neuromorphic computing
New neuromorphic computing hardware that mimics the human brain’s neural systems could reduce computing’s carbon footprint. Neuromorphic chips operate very differently from the silicon chips found in traditional computers. In the brain, processing and memory functions are performed together by neurons and synapses in a single location, and neuromorphic computers are expected to perform these tasks on one chip, whereas conventional computers have separate memory and processing units. Researchers at University College London say a neuromorphic computer could use up to 100,000 times less power than conventional computers, making data centres much more efficient.
This is an edited extract from the Data Centers – Thematic Research report produced by GlobalData Thematic Research.