Alan Conboy busts the common misconceptions surrounding hyperconverged infrastructure
Hyperconverged infrastructure (HCI) has gone mainstream, yet myths persist that cause misconception and confusion even among those who already have HCI solutions deployed. Here are five of the most prevalent myths, debunked.
Myth #1 – HCI is more expensive than building your own virtualisation infrastructure
First of all, the acquisition price of an HCI solution varies by vendor and often by the brand of hypervisor used in the solution. Secondly, while purchasing the individual components needed to build a virtualisation infrastructure may often be less expensive than purchasing an HCI solution, that is only part of the cost. The true and total cost of infrastructure goes far beyond the initial purchase.
The most compelling virtue of HCI solutions is that they make virtualisation easier to deploy, manage, and grow in the future. That simplicity and ease of use translate into a dramatically lower total cost of ownership over time. From deploying in hours rather than days, to scaling out seamlessly without downtime, HCI eliminates many of the major headaches that come with traditional DIY virtualisation solutions.
HCI uses automation and machine intelligence to handle many of the daily tasks typically associated with managing and maintaining virtualisation infrastructure. This ease of use and reduction in management time frees up resources to work on other tasks and projects. The savings can also include eliminating hypervisor software licensing, depending on the hypervisor deployed or supported by the HCI vendor. The savings vary by organisation, but the numbers nearly always bear out that good HCI solutions are less costly over a three-to-five-year period, and often sooner. Total cost of ownership could be discussed in far more detail, but the next myth awaits.
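To make the total-cost argument concrete, a back-of-the-envelope TCO comparison can be sketched as below. All figures (purchase prices, licence fees, admin hours, hourly rate) are illustrative assumptions, not vendor pricing; the point is only that recurring licensing and administration costs can outweigh the initial hardware saving.

```python
# Hypothetical five-year TCO comparison. Every number here is an
# illustrative assumption for the sake of the arithmetic.
def tco(acquisition, annual_licences, annual_admin_hours, hourly_rate, years=5):
    """Purchase price plus recurring licensing and admin labour over the period."""
    return acquisition + years * (annual_licences + annual_admin_hours * hourly_rate)

# DIY stack: cheaper hardware, but hypervisor licences and more admin time.
diy = tco(acquisition=60_000, annual_licences=15_000,
          annual_admin_hours=400, hourly_rate=75)

# HCI appliance: higher sticker price, bundled hypervisor, less admin time.
hci = tco(acquisition=90_000, annual_licences=0,
          annual_admin_hours=120, hourly_rate=75)

print(f"DIY five-year TCO: ${diy:,}")  # $285,000
print(f"HCI five-year TCO: ${hci:,}")  # $135,000
```

Under these assumed numbers the DIY stack starts $30,000 cheaper but ends up roughly twice as expensive over five years; real outcomes depend entirely on an organisation's own licence and labour costs.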
Myth #2 – HCI leaves 30 percent fewer resources available right out of the box
Whether this is a myth depends on the vendor's solution. A number of factors can affect the resources available in an HCI solution, and the biggest is probably a VSA-based storage architecture. Virtual storage appliances (VSAs) emulate SAN or NAS storage in order to support traditional third-party hypervisors that were designed to consume SAN and NAS storage. These VSAs can be very resource-intensive and are required on each node of an HCI cluster.
VSAs primarily consume RAM, with many consuming 24-32GB or more per node, along with multiple CPU cores, and that is in addition to the RAM consumed by the hypervisor itself. This can be a significant percentage of each server or appliance's RAM. Many VSA-based HCI solutions also require SSD storage to be used as a cache, compensating for the same inefficient data paths that drive up the RAM consumption.
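The per-node overhead can be put into numbers with a small sketch. The figures below (a 128GB node, a 32GB VSA, 8GB for the hypervisor) are illustrative assumptions drawn from the ranges discussed above, not any specific vendor's footprint.

```python
# Fraction of a node's RAM left for workloads after infrastructure
# overhead. All figures are illustrative assumptions.
def usable_fraction(node_ram_gb, vsa_ram_gb, hypervisor_ram_gb):
    overhead = vsa_ram_gb + hypervisor_ram_gb
    return (node_ram_gb - overhead) / node_ram_gb

# A 128GB node losing 32GB to a VSA and 8GB to the hypervisor:
frac = usable_fraction(128, 32, 8)
print(f"{frac:.0%} of RAM left for workloads")  # 69% of RAM left for workloads
```

At these assumed sizes roughly 31 percent of the node's RAM is gone before a single guest VM boots, which is where the "30 percent fewer resources" figure in the myth comes from.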
Another resource guzzler is the three-factor replication required to support very large clusters, where the probability of two simultaneous drive failures corrupting data becomes significant. Three-factor replication means each block is written to three separate HCI nodes, so every piece of data consumes three times its size in raw disk space. These solutions try to claw back some of that lost usable storage through deduplication.
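The capacity cost of replication is simple arithmetic, sketched below with assumed figures (96TB raw, a hypothetical 1.5:1 deduplication ratio).

```python
# Effective usable capacity under N-way replication, with an assumed
# deduplication ratio clawing some space back. Illustrative only.
def usable_tb(raw_tb, replication_factor, dedupe_ratio=1.0):
    """Each block is stored replication_factor times; deduplication
    effectively multiplies logical capacity by dedupe_ratio."""
    return raw_tb / replication_factor * dedupe_ratio

print(usable_tb(96, 3))        # 32.0 TB effective, no deduplication
print(usable_tb(96, 3, 1.5))   # 48.0 TB with an assumed 1.5:1 dedupe ratio
print(usable_tb(96, 2))        # 48.0 TB under two-way replication
```

Note that, under these assumptions, three-way replication with 1.5:1 deduplication only gets back to what plain two-way replication would have offered in the first place.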
Not all HCI solutions use VSAs or three-factor replication, however. Some use truly integrated hypervisors that allow more direct storage paths without VSAs. The resources available on these HCI solutions with integrated hypervisors are much higher, as much as you would expect from a highly available virtualisation infrastructure. As for replication factors of three or more, they should only be required for very large clusters, where reliability is bought at a significant cost in capacity and efficiency.