
Don’t make a lift and shift-heap of your hybrid strategy

Fri 25 Jan 2019 | David Safaii

It’s time to dissect the hybrid playbook and call out “lift and shift” for what it is: a recipe for disaster, writes David Safaii, CEO at Trilio

We’ve heard the hype: hybrid cloud is the future. It’s limitless. It’s the ‘ideal’ IT model. It spins ethernet cables into gold. All for the low, low price of £9.95!

But when we consider why hybrid cloud strategies have become so popular, it really comes down to simplicity. Organisations want the flexibility of public cloud with the control of private cloud — and the line between the two should be seamless.

In reality, we’re quite far from a unified hybrid cloud platform that can deliver on these lofty requirements. But that’s beginning to change, and it all comes down to containers.

Containers have forced companies around the world, both new and old, to rethink their cloud strategy. After all, containers aren’t designed to be static; they’re not tied to any one cloud or virtualized infrastructure. Containers DEMAND hybrid cloud.

This is an instance where the IT demand for simplicity over complexity, while still maintaining ultimate control, has pushed the marketplace to evolve. How can vendors move workloads across multiple technologies, clouds, and data centres without interrupting production environments? How can developers gain on-demand access to the resources they need, when they need them? How is IT going to pay for all this? For all these reasons, containers have redefined hybrid cloud environments, which now demand a tightly integrated ecosystem of tools.

IBM’s $34 billion acquisition of Red Hat further confirms this market shift. 

IBM and Red Hat are both committed to containers as a key part of their growth strategy. IBM has long been a prominent supporter of Kubernetes.

OpenShift is a market leader in containers. If you believe in the power of the hybrid cloud and where the industry’s going, the acquisition will be significant.

But convincing IT teams to move more “business workloads” to the cloud is no easy feat. Legacy workloads are often a mixed bag of applications and data with stringent requirements for data protection.

Moving these workloads to modern clouds, virtualized environments, and even containers is going to require a lot more than a “lift and shift” mentality. It’s going to require rearchitecting and a culture change.

Adopting ‘Cloud Thinking’: Why ‘Lift and Shift’ is a Recipe for Disaster

Cloud architectures are new terrain for many organisations trying to modernise. They are very different from traditional on-premises infrastructure, and in many ways better. Despite this, many IT teams are adamant that their existing VMs be adjusted to work on the new platform. This strategy almost never works. Here’s why.

From a responsibility standpoint, cloud environments distribute much of the day-to-day operational work to individual team members. By and large this is a win: administrators no longer need to worry about individual VMs on a granular level. No need to sweat the small stuff.

But when it comes to applying this principle from a technology perspective, VM-driven organisations are often tripped up. They try to overlay multi-tenancy on top of their virtual environment, creating a “tenant” for each department. But cloud tenant environments are designed as user zones, not department-wide workspaces.

This disconnect in expectations can lead to teams sharing virtualized resources without an understanding of the full scope of data and applications within it. Small changes by an individual using a department-wide tenant environment can have serious repercussions, particularly when it comes to policies and compliance.

Unfortunately, there’s no direct translation between how VMs are defined in the legacy world and how they are defined in the cloud.

When administrators create VMs in their on-premises infrastructure, each configuration is custom to that use case, including port assignments, network connections, and more. Cloud VMs require the inverse approach: start with the most general configurations (flavours, networks, and other policies) before you tackle the most specialised use cases.
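To make that ordering concrete, here is a minimal sketch using the openstacksdk Python client, assuming an OpenStack-style cloud; the flavour, image and network names are illustrative placeholders rather than anything prescribed here. The shared building blocks are resolved first, and individual servers are only defined against them afterwards, which is what keeps the pattern repeatable.

# Minimal sketch with the openstacksdk Python client (assumed environment;
# names like "m1.small" and "app-net" are placeholders).
import openstack

conn = openstack.connect(cloud="my-private-cloud")

# General building blocks first: a shared flavour, image, and network
# that any number of instances can reuse.
flavor = conn.compute.find_flavor("m1.small")
image = conn.image.find_image("ubuntu-20.04")
network = conn.network.find_network("app-net")

# Only then is a specific instance defined against them, which keeps
# the recipe repeatable for the second, third, or fourth copy.
server = conn.compute.create_server(
    name="app-server-01",
    flavor_id=flavor.id,
    image_id=image.id,
    networks=[{"uuid": network.id}],
)
conn.compute.wait_for_server(server)

Booting a second copy only means calling create_server again with a new name; none of the general configuration has to be reinvented.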

This means that companies need to modify or transform their VM to include cloud configurations. If they simply modify existing production images, they’re crippling their own ability to spin up second, third, or fourth copies in the future — it’s not repeatable and it’s impossible to scale.

By contrast, if they duplicate the production images to create a new cloud-ready version of each VM, they risk bogging down their stack with a glut of images, all of them consuming storage space. Layer on vendor-specific issues, including the restrictions that come with proprietary data formats, and you have downright data disarray.

From this standpoint, hybrid clouds seem unappealing and risky.

This lack of a truly integrated hybrid cloud platform, coupled with the dawn of containerised workloads, has driven the need for cloud-native data formats. Businesses need more than raw data; they need entire workloads to be portable and immediately implemented throughout their IT infrastructure.

Those workloads also need to be backed up and protected, no matter where they reside. Until cloud-native becomes the norm, it’s nearly impossible for a company to go all-in on hybrid clouds, resulting in many of the disconnected, dispersed multi-cloud environments that exist today.

It’s impossible to achieve the efficiency and flexibility promised by hybrid cloud without cloud-native and cloud-agnostic platforms. Organisations must be able to quickly spin up any workload on any platform without loss of features or functionality.

Cloud-native formats help bridge the gap between your cloud and on-premises systems without compromising data quality. Cloud-native solutions not only enhance performance, but also reduce time spent on management activities. This makes data easier to migrate, back up, and maintain, and thus easier to restore to full working copies.

How ‘hybrid’ should you become?

As hybrid clouds continue to evolve and improve, your organisation should consider adoption more seriously. There’s no one-size-fits-all approach, but there are some serious considerations to make, regardless of organisation size and maturity. Here are the top three things to give some thought to:

Storage costs & organisation

Your first priority when considering a hybrid cloud infrastructure will likely be identifying where data will be stored and how you will access it.

Cloud storage (like Cloudian, StorageGrid, Scality, or AWS) is appealing for many reasons. It allows easy access to high-performance object stores over the internet. It’s highly available and durable. It’s secure. It’s cost-effective.

Despite all these advantages, cloud storage is not a file system. It loosely supports the write-once-read-many (WORM) data access model. These systems don’t support random access to objects, which makes modifying existing objects (especially large objects) unwieldy. Consequently, organisations typically leverage cloud storage to house only static data like websites, documents, and archival backups.
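As a rough illustration of that access model, the sketch below uses the boto3 S3 client (the bucket and key names are placeholders): objects are written and read as whole units, so even a small change to a large object means re-uploading the entire thing.

# Whole-object access with the boto3 S3 client; bucket and key names
# are illustrative placeholders.
import boto3

s3 = boto3.client("s3")

# Writing an object always means uploading the complete body...
s3.put_object(Bucket="example-archive", Key="reports/2019-01.csv",
              Body=b"col_a,col_b\n1,2\n")

# ...and reading it back returns the whole object as a stream.
body = s3.get_object(Bucket="example-archive",
                     Key="reports/2019-01.csv")["Body"].read()

# There is no seek-and-overwrite. To "modify" the object, you change it
# locally and re-upload the full body, which is why large, frequently
# edited data is a poor fit for object storage.
s3.put_object(Bucket="example-archive", Key="reports/2019-01.csv",
              Body=body + b"3,4\n")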

However, applications often need to use cloud storage, which creates an intrinsic mismatch between the type of access it provides and the type applications expect. Most enterprise applications are written for POSIX-based file systems or network file systems, so they cannot consume object storage directly.

To solve this, your organisation will need to find a way to access cloud storage directly via S3, or make do with NFS gateways that act as caching devices, presenting file semantics to enterprise applications while persisting the files as objects in the object store.

Consequently, the usefulness of the object store is severely restricted, and many organisations use it solely as a dumping ground for data that is not needed on a daily basis, if they use it at all.

If that’s your intention when implementing cloud storage, then great: by all means, continue with that strategy. If not, you should identify a way to natively access cloud storage regularly and easily.

Policies and tools for data migration

As organisations think about the safeguards they need for their newly created cloud infrastructure versus their existing legacy infrastructure, many consider it from the VM level upward: if system X only holds data type Y, does it need to be secured to the same degree? Does it still need data protection? Does it need the same level of uptime monitoring, management, and oversight?

For multi-cloud infrastructure, this approach is fine. But the beauty and the terror of hybrid cloud both lie in the ease with which workloads can and should be moved between the two.

The crux of hybrid clouds is their ability to seamlessly unite two varied infrastructure systems, so you should assume that your employees will shift workloads between multiple systems and architectures without skipping a beat.

There are likely ways to limit this activity: set and enforce policies that provide checkpoints before workloads are shifted from on-premises to cloud and back again. But as containerisation continues to take hold, that strategy will be fighting an uphill battle.

More reliably, and more realistically, your IT organisation should focus its efforts on bringing all infrastructure up to an enterprise-ready standard, so that it doesn’t matter where the data is stored: it’s protected regardless.

Vendor integrations

Gone are the days when enterprises were tied to a given platform or vendor for decades. Today, enterprises have dozens of options when it comes to public clouds and private clouds, and they must have the ability to change their underlying platform at will in order to meet the ever-changing needs of their business.

But these cloud systems also need to integrate seamlessly with each other. The industry has seen a pairing-off of vendors: Amazon with VMware, IBM with Red Hat, Microsoft with… itself?

In this rush to find a cloud partner for their legacy-focused solutions, these vendors have worked to pave the way for hybridisation alongside an ally. But as your company considers implementing a new infrastructure strategy, it may be the right time to evaluate your current and planned vendors holistically.

Central to hybrid cloud enablement is the need for vendor freedom, which means your critical business applications are not tied to any one platform and have the ability to be deployed anywhere.

Look for a solution that enables applications to persist in cloud storage and remain accessible from anywhere. Even better, find business applications that leverage platform-independent data formats.

This is only the beginning

When it comes down to it, “cloud thinking” requires a fundamental shift in the way IT teams operate today. Organisations need to understand the ramifications of where their VMs live and optimise accordingly, just as they already do for storage.
