The Stack Archive

Why Dynamic Hyperconvergence is the gateway to the Software-Defined Data Center

Mon 23 May 2016


Ron Nash, Chairman and CEO of Pivot3, explains why dynamic hyperconvergence is the next step in the evolution of IT architecture…

A fundamental change is taking place in the nature and application of data center technology to today’s digital business – a change with far-reaching implications for companies of every shape and size. Across the entire IT industry, a paradigm shift is under way, driven by the demands of an increasingly competitive business environment and by profound changes at the architectural level that touch everything, from edge to core.

Computing and storage platforms leveraged by most organizations today are not equipped to keep up with the breakneck pace of global business, nor are they able to handle the challenges associated with the massive growth of data. As more CIOs and IT decision makers look to reduce infrastructure costs while increasing efficiency and agility to accommodate the needs of the business, the move toward software-defined technologies marks the beginning of the journey to evolve IT for many organizations.

These technologies are all leading to one place: the software-defined data center (SDDC), which is defined by the use of software to provision and optimize all elements of IT, such as networking, storage, compute and virtualization resources. Hyperconverged infrastructure (HCI) is a powerful, emerging technology that is the first step on the path to the SDDC.

The main architectural components are controlled through a software layer that sits atop standard, off-the-shelf hardware. Typically, hyperconverged infrastructures feature automated deployment and orchestration, pooled IT resources, single-pane-of-glass management, out-of-the-box flexibility, holistic scalability, and predictive monitoring and analytics across the entire shared infrastructure.

So how does hyperconvergence put organizations on the path to the SDDC?

The software-defined journey

Building software-defined environments requires a completely new perspective on IT processes. Replacing traditional hardware-based data centers with software-defined capabilities can be a daunting experience, especially for organizations that have been running on legacy hardware for an extended period of time. Hyperconvergence simplifies that process with an automated user experience that helps align IT with business objectives.


When evaluating hyperconverged vendors, it’s important to keep in mind that not all offerings are created equal. At a high level, benefits such as increased agility and reduced risk from siloed IT operations help address workload-utilization challenges, yet performance remains a gating factor beyond single use cases such as virtual desktop infrastructure or disaster recovery.

As organizations continue to move towards digital-oriented business models, they require a flexible infrastructure that keeps pace with the growing amount of data IT is expected to manage. Hyperconvergence can provide the scalable infrastructure that will replace aging, overly complex data centers and displace legacy storage technologies, paving the way for a fully virtualized software-defined data center.

Yet most hyperconverged platforms cannot claim or achieve this today. A truly on-demand infrastructure requires optimizations that deliver the right resources to the right workloads at the right time. To exercise this level of control over the IT ecosystem, hyperconverged infrastructure needs performance assurance capabilities in which management, deployment and policy settings are streamlined and automated. To truly future-proof an organization with hyperconvergence, customers should look for Quality of Service (QoS) and dynamic provisioning capabilities rather than the more nebulous benefits of simplicity and savings.
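As a toy illustration of what delivering "the right resources at the right time" can mean in practice, the sketch below sizes a storage pool against a simple utilization threshold. The 10 TB node capacity and 70% target are assumptions made for the example, not figures from any product:

```python
import math

# Illustrative dynamic-provisioning rule: keep pooled storage utilization
# at or below a target by growing the node count. The 10 TB node size and
# 70% target are assumptions for this sketch, not real product figures.

def nodes_needed(used_tb: float, node_capacity_tb: float = 10.0,
                 target_utilization: float = 0.7) -> int:
    """Smallest node count that keeps utilization at or below the target."""
    return math.ceil(used_tb / (node_capacity_tb * target_utilization))

print(nodes_needed(50))  # 50 TB in use -> 8 nodes to stay at or under 70%
```

A real platform would of course weigh IOPS, CPU and memory alongside raw capacity, but the principle is the same: policy drives provisioning, not manual reconfiguration.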

Customer demand for decreased complexity and more effective management is driving more businesses toward a fully realized software-defined data center. Hyperconvergence is a critical path to getting there, making it the ideal vehicle for ushering in the world of software-defined everything.

A smarter convergence of technologies

Hyperconvergence is already disrupting traditional approaches to data center architecture and becoming increasingly prevalent in the marketplace. Yet most hyperconverged vendors still focus on how well things work inside a self-contained system, locking customers into their existing infrastructure. This actually prevents customers from realizing the full value of their investment in new technology. The key here is open architecture, or at least the ability to embrace it. Hyperconverged systems should not only work well within their own architecture; they should also plug into existing systems, stretching their benefits beyond a single IT ecosystem.

To enable this kind of transformation, a smarter convergence of technologies is required. Customers need a solution suite that allows them to add storage and compute as independent building blocks, works with multiple external compute platforms and storage arrays, and embraces a move toward a software-only strategy in which partnerships with leading providers of both software and hardware help IT build best-of-breed solutions with maximum interoperability and functionality.

Enter dynamic hyperconvergence.

What is dynamic hyperconvergence? What does a dynamic system look like?

Smarter infrastructure begins with dynamic hyperconvergence – a system with software-defined capabilities that govern and guarantee performance and determine how data is secured, stored and protected. There are two key innovations behind this concept, the first of which is erasure coding.

Nearly all hyperconverged vendors rely on replication to ensure fault tolerance and high availability, an approach that is inherently inefficient and is compounded by CPU-intensive processes such as compression and deduplication. Erasure coding makes maximum use of installed storage capacity, frees up valuable CPU processing power and delivers an always-on environment with unmatched levels of availability and fault tolerance. In the past, businesses looking to expand their storage capabilities could choose an option that was fast, stable or cheap, but never a combination of those. Erasure coding gives businesses a data protection scheme that delivers all three.
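To make the capacity argument concrete, the sketch below implements the simplest possible erasure code: a single XOR parity block over four data blocks, able to rebuild any one lost block. Production systems use multi-failure codes such as Reed-Solomon, so treat this purely as an illustration of the replication-versus-coding trade-off:

```python
# Minimal single-parity erasure coding sketch (RAID-5 style). Production
# erasure codes (e.g. Reed-Solomon) tolerate multiple simultaneous
# failures; this toy version tolerates the loss of any one block.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_blocks):
    """Compute the parity block as the XOR of all data blocks."""
    parity = data_blocks[0]
    for block in data_blocks[1:]:
        parity = xor_bytes(parity, block)
    return parity

def recover(surviving_blocks, parity):
    """Rebuild the one lost data block from the survivors plus parity."""
    lost = parity
    for block in surviving_blocks:
        lost = xor_bytes(lost, block)
    return lost

blocks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = encode(blocks)

# Simulate losing the third block and rebuilding it from the rest.
rebuilt = recover([blocks[0], blocks[1], blocks[3]], parity)
assert rebuilt == b"CCCC"

# Capacity overhead: a 4+1 stripe stores 1.25x the raw data, whereas
# 3-way replication stores 3x -- the efficiency gap described above.
print(f"erasure coding: {5/4:.2f}x raw data vs replication: 3x")
```

The same data survives a failure in both schemes; erasure coding simply pays for that resilience with far less raw capacity.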

The second important element in the equation is a Quality of Service layer on top of both storage and hyperconverged systems, capable of governing performance targets, input/output (I/O) prioritization and data placement so customers can meet business service-level agreements with the flexibility they require. QoS gives the system the ability to provision workloads appropriately and differentially based on which applications are mission- or business-critical, which significantly enhances how users interact with their SDDC. With QoS, customers get the performance they expect while avoiding end-user complaints and any negative impact on business productivity.
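A hypothetical sketch of such a policy layer is shown below: each workload class carries a priority and a guaranteed IOPS floor, pending I/O is ordered by priority, and spare throughput flows to the most critical class. The class names and numbers are illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical QoS policy layer of the kind described above. Class names
# and IOPS figures are illustrative assumptions only.

POLICIES = {
    "mission-critical":  {"priority": 0, "min_iops": 5000},
    "business-critical": {"priority": 1, "min_iops": 2000},
    "non-critical":      {"priority": 2, "min_iops": 500},
}

def schedule(requests):
    """Order pending I/O by policy priority; arrival order breaks ties."""
    return sorted(requests, key=lambda r: POLICIES[r[0]]["priority"])

def allocate_iops(total_iops):
    """Grant each class its floor, then give any surplus to the top priority."""
    floors = {cls: p["min_iops"] for cls, p in POLICIES.items()}
    spare = total_iops - sum(floors.values())
    top = min(POLICIES, key=lambda c: POLICIES[c]["priority"])
    floors[top] += max(spare, 0)
    return floors

pending = [
    ("non-critical", "backup-write"),
    ("mission-critical", "db-read"),
    ("business-critical", "vdi-read"),
]
print(schedule(pending)[0])   # mission-critical I/O is served first
print(allocate_iops(10000))   # surplus 2500 IOPS go to mission-critical
```

This is the mechanism behind the service-level guarantees described above: a noisy backup job cannot starve a database, because the policy, not arrival order, decides who gets served.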

The winning combination of erasure coding and QoS aggregates all resources for maximum performance and scalability, provides optimized resiliency for continuous operations with little to no performance degradation during load, stress and failure, and provides predictable performance so customers can confidently create the SDDC on their own terms and timeline.


Dynamic hyperconvergence is the next step in the evolution of IT architecture, breaking the chains associated with management silos and islands of storage and hardware. The modular building block approach, with predictable performance and the simplicity and economics organizations require to be competitive, makes the migration towards the SDDC less of a risk and more of a viable avenue for bolstering effective business processes.

When evaluating hyperconverged vendors, ask yourself, “Does this extract the highest levels of performance possible from my existing environment? Does it offer assurance levels to ensure that critical services and applications are not impacted in extreme demand scenarios? And does it have the built-in ability to deploy multiple, mixed workloads on a hyperconverged platform as well as standalone flash storage products?”

The right hyperconverged platform allows customers to forego the need to constantly reconfigure IT for certain purposes and replaces it with an on-demand, dynamic system that puts the focus back on what matters – your business.

