IO CEO on why the future of the data centre must be software-defined
Thu 23 Oct 2014

The Stack speaks with George Slessman, CEO & Product Architect, IO, about the necessity of introducing software-defined approaches to the data centre.
What does IO identify as failing in today’s data centre model, and why do you feel a software-driven approach is the appropriate response to the cloud?
The data centre industry needs to stay relevant to how the data centre is actually used. Often data centre operators do not fully understand how the data centre is consumed and what actually consumes its resources: applications. Applications written to support business processes are placed in data centres, reside on hardware, and it is that hardware which then consumes the resources of a data centre.
That said, I think what has happened in the past is that there hasn’t been a tight enough integration between applications and the data centre. With the fundamental shifts occurring at the application layer – more discrete architectures and cloud-based and platform-based resources – the data centre needs to change in the same way.
The primary failing today is that the data centre itself is disconnected from the application layer, physically and logically. The data centre is statically provisioned: you have static redundancy levels, static amounts of space, static amounts of energy capability, and static levels of security. Conversely, everything inside the data centre is enormously dynamic – things are changing by the millisecond. Also, on a macro level, the data centre is changing significantly in how it is architected and delivered.
Our approach has been to build a physical data centre layer that is modular. Our modules can be componentised and delivered in separate pieces, at the right size to meet changing needs. The data centre is then configured and managed by stacking a software layer on top of the components, creating a smart data centre – one that has a path to connect to the application layer and react in a dynamic fashion. The application layer is changing, and the physical data centre can change the way it behaves to support that.
As we move to a true cloud architecture, and to a much more service provider-centric model for the data centre, in our view it’s not going to be a nice-to-have, it’s going to be a have-to-have to be able to coordinate and orchestrate the activities of a data centre in conjunction with the application layer.
What are the main industry pressures driving the adoption of software-defined solutions?
Cost is probably the single most important driver. It is the one thing that will change behaviour and change the way that adoption occurs. You can have a great idea, but if it doesn’t save you money or doesn’t generate revenue, it’s not going to get an awful lot of interest. So I would say cost is clearly the primary driver.
Secondary to cost is capability. The legacy data centre approach simply isn’t capable of supporting the next-generation cloud services and cloud architectures that are being developed.
A tertiary driver is security: the traditional approach to data centre security is lacking in how it deals with modern-day threats of infrastructure disruption, which we are seeing play out across all sectors. Electrical grids, water plants, and any other processing facility controlled by a traditional infrastructure management system are clearly targets, and the traditional data centre falls into the same category.
In summary, cost is a driver: software-defined data centres drive down cost, reduce energy consumption, and increase sustainability. From a capability perspective, SDDCs put you in a position to support the next generation of IT kit and application infrastructure. Thirdly, the SDDC creates a security posture that is much more capable of dealing with the next-generation threats we are already seeing emerge today.
You have recently been placed in the Magic Quadrant for DCIM tools by Gartner. They have forecast that by 2017 DCIM tools will be deployed in more than 60% of larger data centres in North America. What do you consider as the main advantage of deploying a DCIM platform?
The data centre operating system that we’ve built, IO.OS, is so much more than DCIM. IO.OS provides the same functionality as a traditional DCIM solution, but goes far beyond monitoring of data centre components, adding remote control, simulation and auto-pilot.
In fact, Gartner positioned IO as a visionary in the Magic Quadrant for Data Centre Infrastructure Management. IO initially developed the operating system for our own needs across our own global data centre footprint. We now sell it to our customers, including on a stand-alone basis.
The data centre operating system is a must-have as environments become more automated and more aligned to the next generation of cloud infrastructure and cloud software. DCIM allows customers to make fast, informed performance decisions and guides them to areas of potential improvement when it comes to sustainable operations and efficiency.
How do you see Big Data and analytics supporting future data centre optimisation?
The beauty of having a connected infrastructure – that is, modular data centres integrated with software – is that you get a set of standardised data that enables a number of additional opportunities for improvement.
Our operating system produces an enormous amount of data on a daily basis. We have more than 80 billion rows of data driving improvement, and about 3 million operating hours across our total modular footprint. We extract that data and our Applied Intelligence Team, which is based in San Francisco, California, uses big data analytics to improve data centre performance, predictability and capacity planning accuracy.
For example, we can look at our current energy consumption and compose analytical models for expected energy usage three, five or seven months from now. This allows us to be much smarter in how we stage and manage capacity. Add to that the advanced simulation functionality, which allows us to ask and answer questions such as: what happens if we change these parameters? We can produce representations of the potential impact of data centre decisions on security, availability, latency and cost.
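As a rough illustration of that kind of forecasting, the sketch below fits a simple linear trend to hypothetical monthly energy readings and projects it three, five and seven months out. The figures, and the choice of a plain linear model, are assumptions for illustration only, not IO’s actual analytics.

```python
# Minimal sketch: project future energy usage from monthly readings.
# The readings and the linear-trend model are illustrative only, not IO data.
import numpy as np

# Twelve months of hypothetical energy consumption (MWh per month)
usage = np.array([310, 315, 322, 330, 334, 341, 349, 352, 360, 366, 371, 379])
months = np.arange(len(usage))

# Fit a simple linear trend to the history
slope, intercept = np.polyfit(months, usage, deg=1)

# Forecast consumption three, five and seven months out, as described above
for horizon in (3, 5, 7):
    future_month = len(usage) - 1 + horizon
    forecast = slope * future_month + intercept
    print(f"Expected usage in {horizon} months: {forecast:.0f} MWh")
```

A real capacity-planning model would account for seasonality, deployment schedules and confidence intervals, but the principle of projecting forward from standardised historical data is the same.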
Not unlike what Tesla is aspiring to do with auto-piloted cars, we’re aiming to reach fully automated capability in our auto-pilot tool in the next five years. This would allow the data centre to run itself inside a set of parameters and objectives, with an integrated toolset and the knowledge of historical data to make the right decisions.
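To make the auto-pilot idea concrete, here is a minimal sketch of a closed-loop check that keeps a module inside a set of operating parameters. The module name, temperature band and fan-speed adjustments are invented for illustration and are not IO’s auto-pilot logic.

```python
# Minimal sketch of a closed-loop "auto-pilot" check: keep a module's inlet
# temperature inside an operating band. Thresholds and actions are illustrative.
from dataclasses import dataclass

@dataclass
class ModuleReading:
    name: str
    inlet_temp_c: float   # current inlet air temperature
    fan_speed_pct: float  # current cooling fan speed

TARGET_LOW_C = 18.0   # hypothetical lower bound of the operating band
TARGET_HIGH_C = 27.0  # hypothetical upper bound of the operating band

def adjust(reading: ModuleReading) -> float:
    """Return a new fan speed that nudges the module back into its band."""
    if reading.inlet_temp_c > TARGET_HIGH_C:
        return min(100.0, reading.fan_speed_pct + 10.0)  # cool harder
    if reading.inlet_temp_c < TARGET_LOW_C:
        return max(20.0, reading.fan_speed_pct - 10.0)   # save energy
    return reading.fan_speed_pct                          # within objectives

print(adjust(ModuleReading("D-MOD-01", inlet_temp_c=29.5, fan_speed_pct=60.0)))
```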
Strangely enough, the data centre is the only part of the IT stack that hasn’t availed itself of this capability already, and in our view it’s clearly becoming a necessity.
In terms of IO’s international expansion – how have you seen your services develop in Singapore over the past year? Are there plans to expand elsewhere in the near future?
We have six data centres globally, including one data centre in Singapore and a site in Slough, outside of London, that will be ready to go live in the first quarter of 2015.
Asia is at the centre of our future expansion plans. As we continue to grow the business we will be focused heavily on how we’re going to grow in the Asian market more broadly. Southeast Asia, specifically, is very important to us and will be a major point of emphasis for us over the next five years.
How does a software-defined approach affect data security? Does it generally increase or reduce common risks?
We often hear people say that by adding a layer of software you’re in fact increasing the risk points in the data centre. That misconception stems from the fact that the most significant risk area for the data centre today has been ignored by most. In the past, building management systems, or BMS tools, managed the core infrastructure of data centres. They have historically not been considered part of the information security stack, and so there hasn’t been a perception that there’s a risk point there. The truth is that there’s an enormous risk point.
Building management systems are the exact same automation systems used to control the chillers, generators, UPSs, and core critical infrastructure for data centres. Having been in the data centre industry for 15 years now, we’ve identified that this is a core risk point – it will be compromised over time by third parties looking to cause damage and disruption to infrastructure.
Our operating system brings that pre-existing toolset into the modern age. Instead of using systems that were built 30 years ago and running them on proprietary networks, we integrate them into the security stack. We have re-written them with modern security infrastructure, and then integrated and exposed that infrastructure to the security framework found in most corporations.
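As a hedged illustration of what exposing BMS infrastructure to the security framework might look like in practice, the sketch below emits a building-management alarm as a structured event that a security monitoring pipeline could ingest. The event fields, device name and alarm are hypothetical, not IO’s implementation.

```python
# Minimal sketch: surface a BMS alarm as a structured event the corporate
# security stack can ingest, rather than leaving it on an isolated
# proprietary network. Field names and the alarm are invented for illustration.
import json
import logging
from datetime import datetime, timezone

# In practice this handler would forward events to the organisation's SIEM
# collector (e.g. via syslog or HTTPS); here it simply writes to the console.
logging.basicConfig(level=logging.INFO, format="%(message)s")
security_log = logging.getLogger("bms.security")

def report_bms_alarm(device: str, alarm: str, severity: str) -> None:
    """Emit a BMS alarm (chiller, UPS, generator...) as a security event."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "bms",
        "device": device,
        "alarm": alarm,
        "severity": severity,
    }
    security_log.info(json.dumps(event))

report_bms_alarm("chiller-04", "unexpected setpoint change", "high")
```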
Our feeling and our experience have been that the software-defined approach to the data centre really increases the visibility of threats across an organisation. It also closes an enormous security gap in most organisations’ IT stack that they don’t even fully recognise: the vulnerability of the building management systems within it.