Containerisation and its impact on the data centre
Mon 9 Jan 2017
Organisations are continually looking for ways to improve the overall efficiency and longevity of their IT assets, deferring or avoiding spend for months or years and freeing much-needed funds for innovation projects or more strategic areas. Containerisation technologies like Docker, LXC and Spoon allow organisations to do exactly this by increasing the efficiency of servers, but they could also have a wider impact on the data centre estate that IT staff should be aware of.
For those unfamiliar with the technology, Docker enables organisations to fit approximately four to six times more workloads on the same hardware than a comparable ‘standard’ virtualised estate using a technology like Xen or KVM. There are a few caveats: Docker can run on Windows and Linux servers, but the Docker Engine sits directly on the host operating system (OS), with containerised apps running in their own instances on top.
This is unlike the traditional model of virtualisation, where a hypervisor sits on top of the main OS and supports multiple ‘guest operating systems’, each with their own app instances, in a four-level server sandwich.
This makes Docker much leaner and more efficient – rather than each workload carrying its own OS and layers of complexity, containers are slimmed down and kept as resource-efficient as possible. This means not only that more containers can be packed onto the same server, but also that deployment is faster, with containers capable of starting up in microseconds rather than minutes.
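The density gain can be sketched with some back-of-the-envelope arithmetic. The per-workload overhead figures below are assumptions chosen purely to illustrate the four-to-six-times claim above; they are not measurements of any real estate:

```python
# Illustrative only: hypothetical overhead figures, not benchmarks.
HOST_RAM_GB = 256             # total memory on the physical host
APP_RAM_GB = 0.5              # memory the application itself needs (assumed)
VM_OVERHEAD_GB = 2.0          # guest OS + hypervisor bookkeeping per VM (assumed)
CONTAINER_OVERHEAD_GB = 0.05  # shared-kernel overhead per container (assumed)

# How many workloads fit when each one pays the full VM overhead,
# versus the much smaller container overhead?
vms = int(HOST_RAM_GB // (APP_RAM_GB + VM_OVERHEAD_GB))
containers = int(HOST_RAM_GB // (APP_RAM_GB + CONTAINER_OVERHEAD_GB))

print(f"VMs: {vms}, containers: {containers}, "
      f"density gain: {containers / vms:.1f}x")
```

With these assumed figures the same host holds roughly 4.6 times as many containers as VMs; real-world gains depend entirely on how heavy the guest OS is relative to the application.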
However, with this speed of deployment also comes new dynamics in the data centre; more virtual servers on a machine may lead to greater power draw, which in turn means hotter servers and increased load on the cooling. Given that containers on servers will spin up and down in microseconds, could this result in flash ‘heat floods’ or over-provisioning of infrastructure? Integrated automation between the IT estate and the data centre fabric, along with new skills for data centre personnel, is likely to become more important in order to maintain the right environment for business-critical applications.
Containing the heat
In theory, the ‘perfect storm’ of servers all spinning up new instances at the same time – for example, a group of retailers all hosted on the same platform, launching Black Friday promotions simultaneously, then spinning up new containers to cope with demand – could cause significant changes in the data centre environment if not planned for or managed in a pro-active manner.
Will we see additional pressures on data centre teams, increasing the burden of responsibility on those managing and maintaining facilities? Data centre teams may need a greater awareness of server loads and characteristics to help combat this. Understanding what is sitting on a server and its likely behaviour will allow data centre teams to correctly configure building management systems and the corresponding environmental control measures (cooling, humidity, electrical distribution and maintenance schedules).
So, if temperatures in the data centre are likely to swing in a less predictable manner, or to rise across the facility as a whole, IT teams will need to ensure that the cooling and electrical infrastructure is able to cope.
Technologies that allow zonal heat management and control, and more sustainable cooling measures such as natural airflow or siting data centres near renewable energy facilities, can all help staff to manage the rising cost and complexity of keeping business-critical applications running smoothly.
Automation in the data centre can also support the management and monitoring of this complexity, highlighting potential ‘hot zones’, taking pre-emptive action to manage the risks associated with increased heat, and integrating with existing VESDA or fire control systems. These technologies are becoming increasingly advanced, displaying graphical heat maps of the facility and applying policy-based rules to ensure its safety and smooth running.
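The kind of policy-based rule such tools apply can be sketched in a few lines. The zone names, sensor readings and temperature thresholds here are hypothetical, not drawn from any real DCIM product:

```python
# Hypothetical zone sensor readings (degrees Celsius) and thresholds.
ZONE_TEMPS_C = {"row-A": 24.5, "row-B": 31.0, "row-C": 27.2}
WARN_C, CRIT_C = 27.0, 30.0

def classify(temp_c: float) -> str:
    """Apply a simple policy rule to one zone's temperature reading."""
    if temp_c >= CRIT_C:
        return "hot"    # e.g. boost cooling in the zone, alert on-call staff
    if temp_c >= WARN_C:
        return "warm"   # e.g. pre-emptively raise airflow before it gets hot
    return "ok"

# Build a facility-wide status map - the data behind a graphical heat map.
status = {zone: classify(temp) for zone, temp in ZONE_TEMPS_C.items()}
print(status)  # → {'row-A': 'ok', 'row-B': 'hot', 'row-C': 'warm'}
```

Real systems layer history, trend prediction and automated actuation on top of rules like this, but the principle is the same: readings in, policy applied, action out.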
Staying cool in the future
However advanced these technologies, the data centre will always need a human overseer, applying years of knowledge and experience to situations, as well as an understanding of context.
Having good visibility of the goings-on in the data centre and the tools to provide a sound overview will only become more important as data centres become more complex; while Docker, containerisation and other developments like unikernels all promise to make technology faster and more agile, it is vital that IT professionals also understand the implications and knock-on effects. It is only by doing this that data centre teams, and the organisations they represent, can keep their cool when things heat up.