The evolution of the data centre: simplifying network architecture to reduce costs
Tue 22 Apr 2014
Virtualisation has changed forever the way data centres are run – and not always in ways that were anticipated. Now, writes Paul Bonner, the head of technical services at Hardware.com, operators can further reduce costs and simplify their networks by taking a multi-vendor approach.
Previously, when IT professionals discussed network infrastructure and data centre costs, “simple” was not a word they often used.
Today, plenty of organisations run more data centres than they need. This is often the result of corporate acquisitions: new data centres are absorbed into the network, and more often than not it is deemed simpler to keep them all operational. As a result, an organisation can find itself with several data centres where one would suffice.
Of course, these unnecessary data centres inevitably lead to substantial excess costs. Each one must be staffed by computer operators, system and facility engineers, and production control personnel, to name but a few. As a report by Computer Metrics outlines, the costs don’t stop there: hardware assets will most likely be “underutilised, as excess capacity must be maintained to handle spikes in demand for each individual data centre”; management effort is duplicated across data centres in multiple time zones; and additional software licences must all be maintained (1).
However, with the evolution of open standards equipment, virtualisation, and cloud computing, technology leaders are now able to place greater emphasis on simplicity when establishing their network architecture. Through data centre consolidation, coupled with a multi-vendor or standards-based approach to network infrastructure, IT professionals can achieve significant cost, time, and energy savings.
To understand why and how this evolution is happening, it’s important to recognise two of the key drivers in data centre evolution:
According to a survey conducted by the International Data Corporation (IDC), the highest IT priority for Chief Information Officers in 2012 was virtualisation and server consolidation (2). Originally introduced as a way to achieve higher server density and therefore maximise an organisation’s investment in hardware, server virtualisation quickly became a way for companies to achieve significant cost savings, increase operational efficiency, and conserve energy.
Today, virtualisation of servers and storage arrays is data centre best practice. Organisations increasingly turn to virtualisation when updating their systems in a bid to cut down on the number of machines ‘taking power out of the wall.’ With the shift from traditional, single-purpose application servers to large shared pools of computing capacity, virtualisation not only maintains performance but also reduces energy consumption by requiring fewer physical servers.
Numerous companies, including Hewlett Packard (HP) and IBM, have announced plans to reduce the number of their physical data centres in favour of virtualised servers. In fact, server and storage virtualisation projects conducted by IBM in 2011 resulted in an energy-use reduction of over 142,000 MWh—and cost savings of approximately $16.5 million (£10.2 million) (3). Similarly, in 2007 HP announced that it would cut its global data centres from 85 to 6, saving the company an estimated $1 billion (£622.7 million) annually (4). In addition to reducing power and cooling costs, such initiatives deliver savings by reducing floor space and the number of staff needed to run the data centres—including facility engineers, operational specialists, and computer operators.
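The scale of such savings follows from straightforward arithmetic: fewer physical machines draw less power and need less cooling. A back-of-the-envelope sketch in Python (every figure below is an illustrative assumption, not IBM’s or HP’s actual data):

```python
# Illustrative consolidation estimate -- all inputs are assumed figures.
physical_servers = 1000      # servers before consolidation (assumed)
consolidation_ratio = 10     # virtual machines hosted per physical server (assumed)
watts_per_server = 400       # average draw per physical server (assumed)
cooling_overhead = 0.5       # cooling adds ~50% on top of IT load (assumed)
gbp_per_kwh = 0.10           # assumed electricity price, GBP per kWh

def annual_power_cost(servers):
    """Annual electricity cost (GBP) for a given number of physical servers."""
    kw = servers * watts_per_server / 1000 * (1 + cooling_overhead)
    return kw * 24 * 365 * gbp_per_kwh

before = annual_power_cost(physical_servers)
after = annual_power_cost(physical_servers // consolidation_ratio)
print(f"Before: £{before:,.0f}  After: £{after:,.0f}  Saving: £{before - after:,.0f}")
```

With these assumed inputs, consolidating ten-to-one cuts the power bill by roughly 90 per cent; real-world ratios and tariffs will of course vary.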
However, the benefits of virtualisation extend beyond cost savings. Each virtual server runs its own operating system and can be rebooted independently of the others. Space is also conserved, as several machines can be consolidated onto a single server running many virtual environments—which in turn means fewer physical servers and less hardware maintenance.
Organisations that virtualise their servers can also achieve faster network connections, increased data security (as data is stored in fewer places), and improved IT compliance. Furthermore, organisations that reduce their number of data centres free up staff time, enabling them to focus on more important, strategic initiatives within the company.
There are several approaches to creating a virtual server: full virtualisation using virtual machines, operating system-level virtualisation, and paravirtualisation.
A second emerging trend in data centre evolution consists of building network infrastructure using multiple vendors and standards-based technology. Over the last decade or so, leading technology vendors, such as Cisco, heavily promoted the “single vendor” approach to network architecture as an easier, more cost-effective way to build and maintain data centres. Although good in theory, the practice ultimately leads to less competitive pricing and reduced flexibility between platforms—driving up data centre complexity and costs for customers. What’s more, vendors can become complacent and take their customers for granted. It is incumbent upon CIOs and network architects to regularly reassess their relationships and contracts with vendors—no matter how long-standing those relationships may be.
Today, organisations are realising they are able to control costs and reduce network complexity by adopting a multi-vendor approach. According to research from IT advisory firm Gartner, organisations that introduce additional vendors to their data centres “reduce total cost of ownership by at least 15 to 25 percent over a five-year time frame.” (5) By introducing competition for existing products, organisations will ensure vendors are continuously vying for their business—keeping costs competitive for both short- and long-term budgets.
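To see what Gartner’s 15 to 25 per cent range can mean in cash terms, consider a simple five-year total-cost-of-ownership comparison (every figure below is assumed purely for illustration):

```python
# Illustrative five-year TCO comparison -- all figures are assumed.
def five_year_tco(capex, annual_maintenance, years=5):
    """Total cost of ownership: up-front capital plus recurring maintenance."""
    return capex + annual_maintenance * years

single_vendor = five_year_tco(capex=2_000_000, annual_maintenance=400_000)
# Competitive bidding is assumed to trim both purchase price and support contracts.
multi_vendor = five_year_tco(capex=1_700_000, annual_maintenance=320_000)

saving = 1 - multi_vendor / single_vendor
print(f"Five-year saving: {saving:.0%}")
```

Under these assumed figures the saving works out at 17.5 per cent—squarely within the range Gartner reports.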
While some IT leaders may be concerned that adding technology vendors will only serve to increase network complexity, research from Gartner demonstrates this is not the case. In their report, “Debunking the Myth of the Single-Vendor Network,” Gartner found that a “surprising benefit from [our] investigation was that for most organisations, the complexity of the network was reduced when they introduced another vendor.”
The Gartner report clearly exposes a number of common misconceptions surrounding the operation of a multi-vendor network environment. A single-vendor environment is not less complicated, easier to manage, or more reliable than a network operating with multiple vendors. The fact is, organisations do not need extra personnel to manage a multi-vendor network environment, and, as outlined in the report, “the total initial capital costs and ongoing maintenance expenses of the environment were clearly higher in a Cisco-only network.”
Multi-vendor networks encourage building infrastructure with standards-based technology instead of proprietary solutions, giving customers more options and greater flexibility when a product upgrade or technology refresh comes around. In today’s rapidly evolving networking industry, avoiding vendor “lock-in” is key to controlling costs and ensuring interoperability.
Overall, the progression towards virtualisation and multiple vendors offers organisations an “out” (or at least a break) from today’s often overwhelming IT demands. By significantly reducing the number of physical data centres they operate, organisations can realise significant cost and energy savings while eliminating some of the complexity of managing space, staff, and troubleshooting across multiple facilities. Additionally, by keeping a competitor to their primary vendor’s products on the floor and building with standards-based technology instead of proprietary solutions, IT leaders can secure more competitive pricing in the long term while protecting their investment in hardware equipment.