Data centre computing is the beast at the heart of virtually everything done online. The data centre, or ‘bit barn’, is an organisation’s brain. It takes in, stores, analyses and disseminates the data through which the organisation lives and has its being. But what are data centres in business, what is their history, and where are they heading?

By 2022, the Internet of Things – which refers to the network of everyday ‘things’ such as vehicles, home appliances and other devices embedded with electronics and software that enable these objects to connect and exchange data – could be in full swing.

These connected devices send compounding volumes of sensor-fed data to data centres, where it can be stored and analysed by complex algorithms. These algorithms become more and more sophisticated as machine learning, voice recognition, facial recognition, augmented reality and other artificial intelligence (AI) technologies become pervasive.

Cloud computing, machine learning, augmented reality, internet TV, the Internet of Things, robotics, cryptocurrencies, voice, blockchain and cybersecurity. These are the big investment themes of tomorrow and they all have one thing in common: they generate a huge amount of data.

For these next-generation technologies to work well, the world’s data centres have to handle more data and reduce latency – move this data around faster.

What are the big themes around data centres?

Global data centre build out

The compounding demands that the Internet of Things, cloud computing, internet TV, gaming and artificial intelligence place on digital lives are creating a bottleneck in the data centre market. The Internet giants are rushing to fill it, and telecom operators and large corporations are also building data centres across the world at a fast clip. These data centres take in, store, process and disseminate the explosive – indeed, exponential – growth in Internet data that forms the raw material and brains of their operations.

Hyper-scale data centres

The internet giants (Amazon, Microsoft, Google and Facebook) have been bringing hyper-scale data centres on stream over the last several years. These are data centres running upwards of 100,000 servers, their software-defined equivalents or vast arrays of cheap ‘bare metal’ servers running Linux control software. Many run 300,000 servers and above. Industrial and commercial giants are going hyper-scale too, running hybrid set-ups that combine Internet-facing data centres in the cloud with onsite enterprise data centres.

The need for regional spread

In the era of globalisation, one might expect high-tech data centres to become the speciality of one or two countries, in the way that India specialises in IT outsourcing or the UK in financial services. Yet issues of latency (the time it takes a data packet to travel between A and B), the need to minimise the ‘blast radius’ of outages, and differing legal requirements around data privacy all mean that a handful of mega-data centres cannot serve the whole world.

Cloud computing

Renting servers, storage and networking services from the cloud can improve the quality of an IT environment while reducing its cost. Approximately 5-6% of the roughly $2,000bn spent on IT worldwide each year currently goes through the cloud model. As computing moves from in-house corporate data centres to third-party cloud data centres, corporations need to buy less of their own networking gear.
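
To put those percentages in context, here is the back-of-the-envelope arithmetic implied by the figures above (a rough sketch; the spend and share are the estimates quoted in this article, not precise data):

```python
# Rough arithmetic behind the cloud-spend estimate quoted above.
# Assumed inputs come from the article: ~$2,000bn annual global IT spend,
# of which roughly 5-6% currently goes through the cloud model.
global_it_spend_bn = 2_000                       # total annual IT spend, in $bn
cloud_share_low, cloud_share_high = 0.05, 0.06   # cloud's share of that spend

cloud_spend_low = global_it_spend_bn * cloud_share_low    # ~$100bn per year
cloud_spend_high = global_it_spend_bn * cloud_share_high  # ~$120bn per year
print(f"Cloud spend: ${cloud_spend_low:.0f}bn-${cloud_spend_high:.0f}bn per year")
```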

Edge computing

Much of the hard-core computer processing behind Internet apps is orchestrated in the cloud. However, latency and reliability constraints mean that not all time-critical computing functions can be done in the cloud; some have to be performed by onboard computers within the devices themselves. A wave of current technology cycles that depend on low latency and high reliability, such as artificial intelligence, autonomous vehicles or augmented reality, is pushing more raw computing power to the “edge of the network”, into endpoint devices like cars or iPhones. The result may be micro-data centres embedded within devices, such as a data centre in a car.

MSP vs Colo vs on-premise

As cloud computing takes off, a general direction of travel has been established for enterprise-facing data centres: they are moving from on-premise data centres to colocation centres (Colo) to managed service providers (MSPs). In other words, they are being outsourced more and more to professional data centre operators. On-premise data centres are groups of servers that a corporation privately owns and controls, often housed in the corporation’s own buildings. Colo providers lease data centre space, bandwidth and/or equipment to their customers, whilst MSPs typically provide a range of IT services up to complete IT management of the hosted environment, including the running of apps and support services.

Hybrid data centres

The majority of enterprises might not migrate their entire IT infrastructure to the cloud. Rather, they could create a hybrid data centre by combining two approaches: renting servers and networking equipment from the cloud for their ‘Internet-facing’ workloads, while retaining their own secured enterprise data centre.

Public vs private cloud

Some businesses might want all their apps to remain locked in their own on-premise data centre. Others are moving to a private cloud and others still to the public cloud. Private clouds enable enterprises to have bespoke cloud infrastructure that is dedicated to a single organisation. Public clouds allow enterprises to access standardised cloud infrastructure services delivered to multiple organisations via the public Internet. On-premise data centres offer the maximum level of control and security. Public clouds offer the least amount of control and security. Private clouds lie somewhere in the middle.

Net neutrality

The vast majority of the data flowing through the world’s telecom pipes comes from the internet-facing data centres owned and operated by the internet ecosystems. Under net neutrality rules, telecom operators in many countries are not allowed to charge internet content companies commercial rates proportionate to their usage. In the US, Republicans have long viewed this policy as a market distortion, while Democrats have seen it as necessary to encourage an open, free and innovative Internet sector. Most, however, agree that net neutrality has reduced telecom operators’ incentive to invest in high-speed broadband infrastructure, causing broadband infrastructure in Western capitalist economies to fall behind more state-controlled telecom markets.

AI chips

Artificial intelligence is becoming more and more important in today’s data centres. AI comes in many forms – voice recognition, facial recognition, recommendation engines, gesture control – but the most important is machine learning, which refers to machines predicting outcomes by analysing large data sets and learning from successes and failures.

There are two aspects of machine learning:

  • training an artificial neural network with massive amounts of sample data; and
  • making inferences about new data samples based on the trained network.

Most ‘training’ occurs in large data centres typically using graphics processing units (GPUs), which are able to handle large ‘throughputs’ better than any other type of chip, because they have massively parallel architectures consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.

Field programmable gate arrays (FPGAs), or ‘programmable’ chips, tend to be better at ‘inference’ because they can reconfigure the hardware as the algorithms evolve and also provide very low latencies.
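
To make the training/inference split concrete, here is a minimal sketch in Python using a toy single-neuron model (an illustration only: real data-centre workloads run on GPU and FPGA frameworks, but the division of labour between the two phases is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                           # 1,000 training samples, 3 features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # synthetic labels

# Phase 1: training -- throughput-bound, processed in large batches,
# typically on GPUs in a data centre.
w = np.zeros(3)
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))      # forward pass over the whole batch
    w -= 0.1 * X.T @ (pred - y) / len(y)   # gradient step

# Phase 2: inference -- latency-bound, one new sample at a time,
# the kind of workload FPGAs are often used for.
new_sample = np.array([0.8, -1.2, 0.3])
probability = 1 / (1 + np.exp(-(new_sample @ w)))
print(f"Predicted probability for the new sample: {probability:.2f}")
```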

Quantum computing

Today’s computers are linear: they store all information as either a one or a zero. By contrast, quantum computers are more chaotic: a qubit can be a one, a zero, or both at once. That allows a quantum computer to do certain calculations far faster, helping some data centres perform more efficiently.
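
As a minimal sketch of that idea, a single qubit can be represented as a normalised two-element state vector; the snippet below is a classical simulation for illustration only, not a quantum program:

```python
import numpy as np

# A qubit as a state vector [alpha, beta]: |psi> = alpha|0> + beta|1>.
# A classical bit is either [1, 0] or [0, 1]; a qubit can hold both at once.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)    # equal superposition of 0 and 1
state = np.array([alpha, beta], dtype=complex)

# Measurement probabilities are the squared magnitudes of the amplitudes.
p_zero, p_one = np.abs(state) ** 2
print(f"P(measure 0) = {p_zero:.2f}, P(measure 1) = {p_one:.2f}")   # 0.50 each
```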

Software-defined networking

Software-defined networking (SDN) is a new architecture for telecom networks in which the emphasis shifts from hardware to software. It is hugely disruptive because it fundamentally changes who controls the data centre. In the old days, all the intelligence of a data centre was in its hardware; today, Facebook, Amazon and Alphabet control data centres, at least the Internet-facing ones, because most of the intelligence is moving into their software code. Yet SDN has been slow to take off, primarily because of cybersecurity concerns, but also because of net neutrality implications.
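
A rough sketch of the SDN idea follows: a centralised software controller decides how traffic should flow and pushes simple match/action rules down to switches, which merely execute them. All names and fields below are hypothetical; real deployments use protocols such as OpenFlow or vendor APIs.

```python
# Hypothetical illustration of software-defined networking: the control
# logic lives in software as plain data, not in proprietary switch hardware.
flow_rules = [
    # Send HTTPS traffic bound for the storage subnet via the fast path.
    {"match": {"dst_subnet": "10.0.2.0/24", "tcp_port": 443},
     "action": {"forward_to": "spine-switch-1"}, "priority": 100},
    # Everything else takes the default route.
    {"match": {}, "action": {"forward_to": "spine-switch-2"}, "priority": 1},
]

def push_rules(switch_id: str, rules: list[dict]) -> None:
    """Stand-in for a controller-to-switch API call (e.g. over OpenFlow)."""
    for rule in sorted(rules, key=lambda r: -r["priority"]):
        print(f"{switch_id}: install match={rule['match']} -> {rule['action']}")

push_rules("leaf-switch-7", flow_rules)
```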

What is the history of data centres?

Before 1980, companies used mainframe computers to store and analyse data.

Two developments in the early 1980s were critical to the formation of data centres as we think of them today. The first was the advent of the personal computer (PC); the second came as microcomputers began to fill out mainframe rooms as servers, and those rooms became known as ‘data centres’.

A number of critical milestones since then changed the course of the data centre industry’s evolution.

By 2000, VMware had started selling virtualisation software, which splits a server (essentially an industrial-scale PC) into several virtual machines. Virtualisation software made data centres cheaper to run and capable of handling far more applications.

In 2002, Amazon created Amazon Web Services (AWS) as an internal division within its IT department tasked with making its use of data centres more efficient. By 2006, AWS, having transformed the efficiency of its own IT operations, started offering ‘Infrastructure-as-a-Service’ for sale to corporate customers. This put cloud computing within the reach of every business, whether it was a multi-national or a one-man-band.

In 2007, Apple kick-started the mobile Internet as we know it today with the launch of the first mass-market smartphone with multiple apps, a touchscreen and a revolutionary mobile operating system known as iOS. Google soon followed with its own version, Android. With many iOS and Android apps running their Internet services from the cloud, a string of Internet-facing data centres sprang up.

By 2017, Internet traffic was growing at 24% per annum globally and over 80% of it was generated by consumers, according to Cisco, largely served from Internet-facing data centres such as those run by Facebook, Alibaba or Google.

This surge in Internet protocol (IP) traffic meant that data speeds had to rise to enable data centres around the world to exchange traffic with the outside world and with each other (North-South traffic) and to communicate internally between servers (East-West traffic).

In 2016 and 2017, many data centre operators decided to upgrade their optical interconnect equipment so that data transfer speeds could increase from 40G to 100G, which refers to a group of computer networking technologies for transmitting data across a network at a speed of 100 gigabits per second.
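
As a back-of-the-envelope illustration of what that jump means (illustrative figures, not from the article), note that 100G refers to gigabits, not gigabytes, per second:

```python
# Time to move 1 TB over a 40G vs a 100G link (ignoring protocol overhead).
# 100G Ethernet = 100 gigabits per second, i.e. 12.5 gigabytes per second.
payload_terabytes = 1.0
payload_bits = payload_terabytes * 8e12          # 1 TB expressed in bits

for link_gbps in (40, 100):
    seconds = payload_bits / (link_gbps * 1e9)
    print(f"{link_gbps}G link: ~{seconds:.0f}s to move {payload_terabytes:.0f} TB")
# ~200s at 40G vs ~80s at 100G
```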

The data centre story… where did they come from and where are they going?

  • 1946: ENIAC, the first electronic computer, is switched on to calculate artillery firing tables.
  • 1954: The first fully transistorised computer arrives, using transistors and diodes rather than vacuum tubes.
  • 1960: IBM System Series of mainframes is born.
  • 1971: Intel’s 4004 becomes the first general-purpose programmable processor.
  • 1973: Xerox unveils the first desktop system to include a graphical user interface and large internal memory storage.
  • 1977: ARCnet, the first commercial LAN, is installed at Chase Manhattan, connecting up to 255 computers at data speeds of 2.5Mbps.
  • 1981: PC era begins.
  • 1982: Microcomputers begin to fill out mainframe rooms as servers and the rooms become known as ‘data centres’.
  • 1984: Sun Microsystems develops the network file system (NFS) protocol, enabling client computers to access files across a network.
  • 1999: The ‘dotcom’ surge, with its demand for fast connectivity and non-stop operations, turns the data centre into a service centre.
  • 2000: VMware begins selling VMware Workstation, which is similar to a ‘virtual’ PC.
  • 2002: Amazon begins the development of ‘Infrastructure-as-a-Service’ at Amazon Web Services (AWS).
  • 2006: AWS starts offering web-based computing infrastructure services, now known as ‘cloud computing’.
  • 2007: Apple launches first iPhone, kicking off the smartphone industry and creating the mobile internet as we know it today.
  • 2010: First solutions for 100G Ethernet are introduced.
  • 2011: Facebook launches Open Compute Project (OCP) to share specifications and best practice for energy efficient data centres.
  • 2012: Surveys suggest 40% of businesses in the US are already using the cloud.
  • 2013: Docker introduces open-source OS container software.
  • 2015: Google and Microsoft lead massive build outs of data centres, both for their own use and to sell as IaaS services.
  • 2016: Google spends over $10bn in capex, mostly on data centres.
  • 2016: Industry spends $40bn in cloud data centre related capex, according to IDC.
  • 2016: Global population of ‘hyperscale’ data centres reaches 297, according to Synergy Research.
  • 2016: Alibaba becomes the world’s fastest growing cloud services company, with revenues rising by nearly 200% to $685m.
  • 2017: Intel demos 100G silicon photonic transceiver plugged into OCP compliant optical switch.
  • 2017: 30% of data centres migrate to 100G data speeds and an ensuing shortage of optical interconnect parts materialises.
  • 2017: Huawei and Tencent join Alibaba in major data centre build-outs in China.
  • 2017: First 400G optical modules for use in data centre and enterprise applications become available in the CFP8 form factor.
  • 2017: Big inventory pile up of optical gear in China is largely responsible for volatility among optical interconnect stocks.
  • 2018: Leading data centre operators start migration to 400G data speeds.
  • 2018: Massive China infrastructure upgrade begins, led by China Mobile and China Unicom.
  • 2018: Silicon photonics technology starts to impact data centre networking architectures positively.
  • 2019: Public cloud services reach revenues of $318bn, according to Gartner.
  • 2020: Cloud data centre related capex reaches $60bn, according to IDC.
  • 2020: Edge computing (i.e. micro-data-centres embedded in devices) revises the role of the cloud in key sectors of the economy.
  • 2020: Optical interconnect market turns over $17bn, according to Ovum, within which silicon photonics accounts for $700m.
  • 2020: Interconnect sector reshaped by combination of silicon photonics, D-I-Y data centre designs with contract manufacturers and M&A.
  • 2021: Over 450 hyperscale data centres operate worldwide, according to Synergy Research.
  • 2021: Data centre speeds exceed 1,000G.
  • 2025: Data centres become increasingly in-device.

This article was produced in association with GlobalData Thematic research. More details here about how to access in-depth reports and detailed thematic scorecard rankings.