Hyperconverged Infrastructure – Where’s the Network?
Wed 3 Sep 2014

Chris Swan, CTO at Cohesive FT, discusses the new EVO:RAIL package launched last week at VMworld. He asks where the network comes into play alongside compute and storage in such hyperconverged infrastructure models.
One of the big news items from last week’s VMworld was the launch of EVO:RAIL, a ‘hyperconverged infrastructure’ reference design with software from VMware and hardware from a variety of partners. The RAIL part of the name comes from the smallest unit of deployment that fits into 2U of standard rack space, and onto a single rail within that rack. EVO:RAIL is described as delivering ‘compute, network, storage and management’, and it’s worth picking apart what’s going on in each of those areas.
An EVO:RAIL package brings together four compute nodes, with each node containing (at least) two Intel E5-2620v2 hexacore CPUs and 192GB of memory. Since an EVO:RAIL package is advertised as being able to run ‘100 general purpose VMs’, that’s 25 VMs per node, or just under 8GB of RAM per VM (before counting any memory overcommit), which is pretty generous.
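A quick back-of-envelope check of those numbers (the node and memory figures come from the spec above; the 100 VM figure is VMware’s advertised capacity), sketched in Python:

```python
# Back-of-envelope sizing for one EVO:RAIL appliance, using the figures above.
nodes = 4
ram_per_node_gb = 192
advertised_vms = 100             # VMware's '100 general purpose VMs' claim

vms_per_node = advertised_vms / nodes            # 25 VMs per node
ram_per_vm_gb = ram_per_node_gb / vms_per_node   # ~7.68 GB, 'just under 8GB'

print(f"{vms_per_node:.0f} VMs per node, {ram_per_vm_gb:.2f} GB RAM per VM")
```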
Storage for each node is made up of three 1.2TB drives for VSAN and a 400GB SSD for VSAN cache. There’s also a standalone drive for booting ESXi and a Virtual SAN certified pass-through disk controller. Depending on the parity scheme used, the base system provides around 100GB of redundant storage per VM, which won’t cut it for ‘big data’ or video archiving, but should be plenty for most other use cases.
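The storage figure can be sanity-checked the same way. The sketch below uses the drive counts and sizes above; the mirroring and 3+1 parity layouts are purely illustrative assumptions about the protection scheme, which is what moves the per-VM figure around the ‘around 100GB’ mark:

```python
# Rough per-VM storage arithmetic for one EVO:RAIL appliance (the SSDs are cache only).
nodes = 4
hdds_per_node = 3
hdd_size_gb = 1200
advertised_vms = 100

raw_gb = nodes * hdds_per_node * hdd_size_gb   # 14,400 GB raw across the appliance
per_vm_raw = raw_gb / advertised_vms           # 144 GB raw per VM

# Usable, redundant capacity depends on the protection scheme chosen (illustrative only):
per_vm_mirrored = per_vm_raw / 2               # straight mirroring: ~72 GB per VM
per_vm_parity = per_vm_raw * 3 / 4             # a 3+1 parity layout: ~108 GB per VM

print(f"raw {per_vm_raw:.0f} GB, mirrored {per_vm_mirrored:.0f} GB, "
      f"3+1 parity {per_vm_parity:.0f} GB per VM")
```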
Management is where the new stuff from VMware comes to light. EVO:RAIL places the management of an entire virtual environment behind a single, simple and unified web user interface. That’s a lot easier than messing around directly with ESXi/vSphere for the VMs, then separate tools for storage and so on. It also eliminates the choice (and possible analysis paralysis) between vCenter Server (which is part of the bundle), vCloud Director and vCloud Automation Center. The system also includes Log Insight and comes with 3 years of support and maintenance. Going from plugging in to running VMs is supposed to take less than 15 minutes.
The networking for each node consists of two 10GbE ports (which can be configured to use either 10GBase-T or SFP+) and a single 1GbE IPMI port for remote management. From a hardware perspective that means each EVO:RAIL appliance is driving 8 ports on a top-of-rack switch (and another 4 ports for the out-of-band management network). There are a couple of curious things about this:
1. EVO:RAIL basically says nothing about networking equipment beyond the server NIC. The top-of-rack switch isn’t part of the design, leaving users free to make their own choice of networking vendor (or even to use existing equipment if they have something suitable lying around). One view of this might be that top-of-rack networking is now such a commodity that it’s unimportant to the design: just pick any switch, it really doesn’t matter. It is, however, notable that any hardware switch configuration (for VLANs etc.) needs to be done as a standalone exercise, and isn’t in scope for the management tools.
2. That’s a lot of cables. Compute and storage might be ‘hyperconverged’, but the networking really isn’t. Cabling can be a significant hidden cost when building out infrastructure (the rough tally below illustrates the point), which will leave some wondering why the design doesn’t offer the option to use QSFP (like the Open Compute Project designs).
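As a rough tally of the cabling, assume a standard 42U rack filled with 2U appliances and the port counts given above (the rack height and the space left for switches are assumptions, not part of the EVO:RAIL design):

```python
# Rough cable tally for a rack of EVO:RAIL appliances, using the port counts above.
rack_units = 42                   # assumed standard rack height
appliance_units = 2
appliances = rack_units // appliance_units    # 21, ignoring space for switches and PDUs

data_cables = appliances * 4 * 2   # four nodes per appliance, two 10GbE ports per node
ipmi_cables = appliances * 4       # one 1GbE IPMI port per node

print(f"{appliances} appliances (~{appliances * 100} VMs): "
      f"{data_cables} x 10GbE + {ipmi_cables} x IPMI = {data_cables + ipmi_cables} cables")
```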
Of course hardware is just one part of the networking story, and with so many VMs in such a small amount of equipment much of the real action is going on in software. By bundling vSphere Enterprise Plus, EVO:RAIL includes a distributed switch, and the web UI simplifies its configuration. The system does not, however, include VMware’s NSX software-defined networking (SDN) platform.
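For anything the simplified web UI doesn’t cover, the bundled distributed switch is still reachable through the normal vSphere API. Here’s a minimal sketch using pyVmomi (VMware’s Python SDK) to list the distributed switches and their port groups; the hostname and credentials are placeholders, and certificate verification is disabled only on the assumption of a self-signed lab setup:

```python
# List the distributed switches and their port groups on a vCenter instance.
# Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # self-signed lab certificates only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        print(f"distributed switch: {dvs.name}")
        for pg in dvs.portgroup:
            print(f"  port group: {pg.name}")
    view.Destroy()
finally:
    Disconnect(si)
```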
For now it seems that ‘hyperconverged’ is mostly about compute and storage being brought together. It will be interesting to see what happens when the network is fully brought into the mix too.