What’s wrong with software-defined data centre (SDDC) definitions?
Wed 23 Apr 2014
The main problem is that the SDDC reinforces the divide between IT and facilities and is symptomatic of a wider industry issue, says Soeren Juul Schroeder. Unless the industry rethinks its organisational structures – and bridges that divide – we may fail to realise the potential of the software-defined data centre.
According to Wikipedia, the “software-defined data centre (SDDC) is an architectural approach to IT infrastructure that extends virtualisation concepts such as abstraction, pooling, and automation to all of the data centre’s resources and services to achieve IT as a service. In a software-defined data centre, ‘compute, storage, networking, security, and availability services are pooled, aggregated, and delivered as software, and managed by intelligent, policy-driven software.’”
While some people think that SDDC is just a bit of marketing hype – it was one of Computer Reseller News’ biggest stories in 2012 – many people believe that data centres of the future will indeed be software defined. That being the case, it’s pretty essential that we get the definition straight. If we take the raw definition that came out of VMware, the SDDC provides great dynamism at the IT level – but what about the underlying infrastructure?
Can you run this very dynamic monster – where nothing has to be turned on and off, nothing has to be plugged in or pulled out – on top of something that is very static, which is what our physical data centre infrastructure typically is? Unless we expand the definition to include DCIM, so that the same level of abstraction and pooling happens across the entire physical infrastructure, I don’t believe the SDDC will become a permanent part of the industry.
In my opinion, the SDDC is actually another case of the industry reinforcing the divide between IT and facilities, because it clearly stands out as an IT proposition. It has a purpose, it solves a lot of problems and it has a very clear value attached to it. However, unless we bridge the gap between IT and facilities, it could become just another siloed system – same old, same old.
The SDDC needs to be able to scale in both its physical infrastructure and IT dimensions. Today’s modular and prefabricated data centre technologies already give us the capability to let physical infrastructure scale up and down according to IT load – right-sizing. In other words, we already have many of the components we need to make a truly dynamic data centre.
Moreover, if you take what is defined as the SDDC and pull it all the way down into the physical infrastructure – modular UPSs and dynamic-range cooling equipment, for example – all of these systems are already capable of operating at different capacities.
Almost every data centre has a large amount of monitoring taking place, generating data that is already available for use – which is where DCIM comes in. DCIM gives us a logic engine that understands and interprets data centre information and puts it into the context of the facility’s operation. What we really need is to take that to the next level: scaling the physical infrastructure up and down as the business, or the business process, scales up and down as well.
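The policy idea described above – using monitored IT load to right-size physical capacity – can be sketched as a simple control loop. The module size, headroom factor and `PowerPool` class below are illustrative assumptions for the sake of the example, not a real DCIM interface:

```python
# Illustrative sketch of right-sizing: activate only as many modular UPS
# modules as the current IT load (plus headroom) requires.
import math
from dataclasses import dataclass

MODULE_KW = 50    # assumed capacity of one UPS module, in kW
HEADROOM = 1.2    # keep 20% spare capacity above the measured load

@dataclass
class PowerPool:
    active_modules: int = 1

    @property
    def capacity_kw(self) -> float:
        return self.active_modules * MODULE_KW

    def right_size(self, it_load_kw: float) -> int:
        """Activate just enough modules to cover load plus headroom."""
        needed = max(1, math.ceil(it_load_kw * HEADROOM / MODULE_KW))
        self.active_modules = needed
        return needed

pool = PowerPool()
pool.right_size(120)        # 120 kW load * 1.2 headroom -> 3 modules
print(pool.capacity_kw)     # 150.0 kW of active capacity
```

In a real deployment the load figure would come from the monitoring layer already present in the data centre, and the policy (headroom, minimum modules, ramp rates) would be set by the operator rather than hard-coded.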
The way we are organised is something we need to look at as an industry. If we keep organising ourselves in the traditional IT and facilities silos, as the industry has done for the last 20–30 years, will we become the converged organisations that can handle these capabilities, especially with IT moving at a much faster pace than facilities?
Do we really have an organisational structure and a set of KPIs that support that type of thinking? I think maybe not at this stage – possibly in the future – but only human beings can make that happen.
About the author: Soeren Juul Schroeder is Software Sales Director, EMEA, at Schneider Electric.