The history of data centres
Tue 21 Jun 2016
Today, data centres can be large and complex enough to influence the economy of the region hosting them. Extreme examples are Switch’s SuperNAP campus in Las Vegas and the newly emerging SuperNAP Reno. According to Brian Sandoval, Nevada’s Governor, this project will bring $1 billion of investment, giving the state a massive economic boost while making it the most digitally connected in the US. Switch has already become a major economic force in Las Vegas, where its construction projects have created hundreds of jobs.
Of course, data centres haven’t always been this big or dominant; their rise has reflected computing’s growth from a productivity-enhancing tool to an essential part of our working and social lives. Steadily increasing computer performance, coupled with continuous reductions in cost, power consumption and size, has been a key driver, but there are other factors as well.
Commercial computer use developed from the mid-Sixties, when one early data centre comprised a pair of IBM 7090 mainframes running a reservation system called Sabre, jointly developed by IBM and American Airlines. The system, which processed 84,000 phone calls per day, was housed in a specially designed computer centre in Briarcliff Manor, New York.
Technology developments made computing more accessible from the early Seventies. In 1973, the Xerox Alto represented a landmark step in personal computing because of its graphical user interface, bit-mapped screen, generous internal and external storage, mouse and dedicated software. The threat of disaster wasn’t as critical as it is now, because most computers were used for after-the-fact processes such as book-keeping rather than real-time transaction processing. Nevertheless, formal disaster recovery plans were documented from 1973, and SunGard developed the first commercial disaster recovery business in 1978.
However, the late Seventies also saw computers moving into offices, and dedicated data centres fell out of favour for a while. This trend accelerated in the Eighties, after the birth of the IBM Personal Computer (PC) and the boom of the microcomputer era. Computers were installed everywhere, with little thought for their specific environmental and operating requirements.
In 1988, IBM introduced its Application System/400 (AS/400), which quickly became one of the world’s most popular business computing systems. Then, as information technology operations started to grow in complexity, organisations became aware of the need to control IT resources. As the Nineties arrived, microcomputers – now called ‘servers’ – started to reappear in computer rooms, with the installations being called ‘data centres’. Inexpensive networking equipment accelerated these developments, and the client-server computing model was born.
The dot-com bubble fuelled a boom in data centres, as companies needed fast Internet connectivity and non-stop operations to deploy systems and establish a presence on the Internet. Many companies then started building very large facilities to provide flexible hosting services for third-party users. By the early 2000s, the growth of data centres had made their power demand an important issue; hardware makers began to focus on improving power efficiency, while data centre owners concentrated on improving cooling and airflow efficiency. Virtualisation also became established as a route to optimising the use of computing resources.
Today, Software-as-a-Service (SaaS) is shifting the provision of computing resources towards a subscription, capacity-on-demand model. This model favours co-operation between network infrastructure and data centre operators to deliver a huge increase in data bandwidth. Large Internet companies with huge subscriber bases are leading the design of distributed cloud data centres. Google, for example, is believed to operate 33 data centres around the world, estimated to house nearly two million servers between them.