The myth of cloud mobility and a recipe for real agility
Tue 23 Feb 2021 | Tim Smithson
What is the true path to mobility, agility and cloud ability?
Come to the cloud, they said. Come one and come all and enjoy the multifarious models, functions and extensions that cloud computing offers with an infinite promise of agility, mobility and flexibility.
Well yes, that’s the opening sales gambit. But as many of us know, the reality is somewhat less agile, rather lower down the mobility scale and inherently less flexible.
Many vendors, on both the cloud and on-premises sides of the argument, share traits that tend to lock customers into one technology or another. This is inevitable to a certain extent, but there is a growing trend towards allowing and providing more openness.
While cloud stands for reducing Capital Expenditure (CapEx) and increasing agility, promoting more fluid IT procurement channels where Operational Expenditure (OpEx) can be raised and lowered according to demand, the modern state of public cloud can be altogether more unyielding and fixed.
The methodology & mechanics of the myth
Real world cloud rigidity (rather than much-fabled agility) exists for two primary groups of reasons: one is economic, and one is technical.
Economic reasons for the lack of cloud mobility include licensing restrictions, pre-agreed purchasing contract stipulations and the related area of so-called ‘reserved instances’, where customers are promised specified bulk discounts in exchange for purchasing a more predictable level of cloud capacity. This sounds rather more like a legacy model than an agile one.
Other economic reasons here can come down to historical customer-vendor relationships, preferred supplier arrangements and other corporate-level agreements.
Those aspects are all real, but there is a more fundamental reason why cloud mobility has suffered from mythical misperceptions.
The ups and downs of cloud data egress
At the more technical (and more practical) end of the scale, moving cloud instances can be tough. Add to this the issue of egress: Cloud Service Providers (CSPs) generally operate a free-upload, pay-to-download model, so data flows into a cloud for nothing but incurs a per-gigabyte charge on the way out.
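To put that upload-free, download-pay asymmetry in concrete terms, here is a back-of-the-envelope sketch in Python. The per-gigabyte price is an illustrative assumption for the sake of the arithmetic, not any provider’s published rate.

```python
# Back-of-the-envelope egress estimate. The default price below is a
# made-up illustrative figure, not a real CSP's published rate.
def egress_cost(gb_transferred: float, price_per_gb: float = 0.09) -> float:
    """One-off cost of moving a dataset out of a cloud."""
    return gb_transferred * price_per_gb

# Moving a 10 TB workload between clouds at the assumed rate:
print(f"${egress_cost(10_000):,.2f}")  # roughly $900, before any refactoring work
```

Uploading the same 10 TB typically costs nothing, which is exactly the asymmetry that discourages movement once data has landed.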
But cloud flexibility should exist and needs to exist; that way customers can make the most of the pricing and performance differences offered by different clouds. It should also mean that customers are able to take the same application or implementation and move it to different geographic regions to reduce latency where needed.
But these are perfect-world scenarios, and they rarely exist in the real world. Compound the reality of cloud inflexibility with the skills silo that naturally develops inside any given cloud deployment (where proficiencies are aligned to the nuances, efficiencies and protocols of a given CSP) and you can see why cloud-to-cloud lift-and-shift becomes impractical for so many organisations.
As cloud engineers know, when you build an application on specific services from a particular cloud provider, even a fairly typical application would normally need to be refactored before it could practically be moved.
Strategies for reducing vendor lock-in?
If we separate applications from workloads, the applications contain the code that performs a given function, while what remains of the workload is the data resources. Moving an application can become unnecessary if the application can simply be destroyed and rebuilt. It is the underlying data that makes the application useful, and it is the core value of any workload.
In an ideal world you want to modernise your application layer and control it programmatically, so that you can automate the construction (and destruction) of the application on any cloud; this helps reduce egress. However, adopting a specific cloud service will still require refactoring for each provider: there is great value in modernising your applications and utilising microservices, but you risk being locked into the given provider’s services. The workload’s data is also inextricably linked to the data service supporting it, so shifting it becomes equally difficult. The underlying storage architecture or database may need refactoring, it may be built on yet another set of cloud storage or database services, and the bulk of the data must still be transferred, not to mention the dreaded egress cost.
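The construct-and-destroy idea above can be sketched as a thin adapter interface. Everything here is a hypothetical illustration: the class and method names are assumptions, and a real setup would delegate to an infrastructure-as-code tool or a provider SDK rather than the in-memory stand-in shown.

```python
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """Hypothetical per-provider adapter; names are illustrative."""
    @abstractmethod
    def construct(self, app_spec: dict) -> str:
        """Build the application from its spec; return a deployment id."""
    @abstractmethod
    def destroy(self, deployment_id: str) -> None:
        """Tear the deployment down."""

class InMemoryAdapter(CloudAdapter):
    """Stand-in 'provider' used only to demonstrate the lifecycle."""
    def __init__(self) -> None:
        self.deployments: dict[str, dict] = {}
        self._counter = 0
    def construct(self, app_spec: dict) -> str:
        self._counter += 1
        dep_id = f"dep-{self._counter}"
        self.deployments[dep_id] = app_spec
        return dep_id
    def destroy(self, deployment_id: str) -> None:
        del self.deployments[deployment_id]

def move(app_spec: dict, old: CloudAdapter, old_id: str, new: CloudAdapter) -> str:
    """Rebuild on the new provider, then tear down the old deployment.

    Only the application is recreated this way; the workload's data
    still has to be transferred (and paid for) separately."""
    new_id = new.construct(app_spec)
    old.destroy(old_id)
    return new_id
```

The point of the sketch is that the application becomes disposable, but note the caveat in `move`: the data behind the workload is untouched by this pattern.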
If an organisation wants the pick-and-mix efficiency and cost effectiveness of cloud portability, then the only practical way to achieve it is to abstract the lower infrastructure substrate behind a higher, provider-agnostic tier. Further still, the only way to build efficiently is to build cloud applications on a cloud platform that is agnostic in and of itself.
This process should ideally start at the initial planning architectural stage so that the organisation can reduce the amount of lock-in experienced throughout the cloud software development lifecycle. There are both valid economic and technical reasons to have multiple cloud providers, but a business needs to think about limiting the number of restrictions that will govern its cloud deployment once it is live.
A recipe for cloud agility
A prudent way to approach cloud architecture planning is to think about it backwards, i.e. always ask what it would take to deconstruct the entity the cloud development team has built. What dependencies have you built into specific services, and what would happen if you tried to move that application or workload from one provider to another?
New-age cooking techniques are fond of offering deconstructed recipes, so take that approach and make sure the business knows what the ingredients list is, what the cooking technique requires and what kind of heat source is needed before cooking begins. Otherwise we risk creating dishes so complex that we cannot even unpick them to determine the ingredients.
Cloud mobility is possible and cloud can provide organisations with core advances in application and data services flexibility and control; it’s just a question of knowing how to combine the right components in the right quantities in the right shapes at the right time in the right place… and then, crucially, knowing which table to serve them on.
A strategy to combat ambiguity and complexity
The strategy should be simple: build your infrastructure and platform services on a multi-cloud operating system that provides an abstraction layer. This allows organisations to benefit from a modern cloud operating system that can live in the cloud or on-premises. The benefit comes from utilising a common set of web services and microservices without tying them to a rigid delivery mechanism.
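One way to picture that abstraction layer in code: application logic is written against a small, provider-neutral interface, and the backend is injected. The `ObjectStore` protocol and `LocalStore` backend below are hypothetical illustrations, not the API of any real multi-cloud operating system.

```python
from typing import Protocol

class ObjectStore(Protocol):
    """Provider-neutral storage interface the application depends on."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class LocalStore:
    """On-premises stand-in; a cloud backend would wrap a CSP's SDK."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def save_report(store: ObjectStore, name: str, body: bytes) -> None:
    # Application code never names a provider, so swapping the backend
    # requires no refactoring here.
    store.put(f"reports/{name}", body)
```

Swapping `LocalStore` for a cloud-backed implementation changes one constructor call at the edge of the system, not the application code itself, which is the lock-in-limiting property the strategy is after.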
If there is one takeaway from this analysis, it should be the dynamic change, enhancement and augmentation that exist at almost every level of the cloud computing universe. Cloud is developing further right now, as we speak: new data models are being created, new workflows are being enabled and new service-control nuances are being implemented around the clock. Organisations need to shake off their inertia and appreciate the cadence of the cloud itself. That is the route to mobility, agility and cloud ability.