Q&A: State of Kubernetes in the UK, with Jetstack CTO Matthew Bates
Thu 15 Oct 2020
Matt Bates is co-founder and CTO at UK Kubernetes outfit Jetstack. His background is in solutions for the acquisition, management and exploitation of large-scale data. Since its launch, he has contributed widely to the Kubernetes project, both to the technology and to the ecosystem. He was also an early employee at NoSQL startup MongoDB, and previously at Deutsche Telekom R&D and Detica. In this Q&A, Matt discusses Kubernetes' potential, its challenges and the state of its maturity in the UK.
Why is Kubernetes such a game-changer?
Kubernetes enables developers to be productive, letting them focus much more on developing applications rather than being tied up in the complexity of provisioning and managing cloud infrastructure.
With Kubernetes, developers declare how their software should be deployed, using application-level abstractions – the so-called “desired state” – and Kubernetes actualises it. For many of our customers, this is as simple as a pull request in a Git repository for a developer to deploy their application. Automation kicks in and Kubernetes does the heavy lifting with the infrastructure: working out where to schedule the workload, wiring up the networking, including cloud load balancers, as well as provisioning and attaching storage, and much more. In other words, many of the tasks that typically slow developers down and allow mistakes to creep in are automated with Kubernetes.
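To make the "desired state" idea concrete, here is a sketch of the kind of minimal Deployment manifest a developer might commit to a Git repository; the names and image below are hypothetical placeholders, not taken from the interview:

```yaml
# A minimal Kubernetes Deployment declaring the desired state:
# three replicas of an illustrative web application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 3              # desired state: always three running copies
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: example.com/web-app:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

Once applied, Kubernetes works out where to schedule the three pods, wires up the networking, and continually reconciles the cluster back to this declared state.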
This automation doesn’t just happen at the point of deployment; the state is always being reconciled, so if applications and infrastructure fail, they can be automatically healed without manual operator intervention – far fewer out-of-hours callouts is always a win! Kubernetes can also autoscale applications on demand and perform rolling upgrades as applications are upgraded. This involves complex orchestration, yet it’s built-in and automated.
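Autoscaling on demand is itself expressed declaratively. As a sketch, a HorizontalPodAutoscaler like the following (targeting a hypothetical Deployment named `web-app`) tells Kubernetes to keep average CPU utilisation near a threshold by adding or removing replicas:

```yaml
# Scale a hypothetical "web-app" Deployment between 2 and 10 replicas,
# aiming for roughly 70% average CPU utilisation.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app          # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```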
Alongside all of this, Kubernetes is cloud-agnostic too; there’s no need to redesign or re-architect the infrastructure if you opt to move to a different cloud provider. It’s very much an API for the modern cloud without vendor lock-in.
What are some of the complications organisations are having getting the most from Kubernetes?
Like most infrastructure systems, it’s fair to say that Kubernetes is not without complexity; it has a reasonable learning curve and it can take time to get right, especially to operate in production. The Kubernetes ecosystem is also experiencing rapid innovation, meaning that for new or inexperienced developers in particular, it’s not always easy to know the right tools to use. It comes with most batteries included, but not all – something that’s quite intentional, since Kubernetes itself is not meant to be a complete platform, but rather the foundation on which to build platforms. As such, many companies partner with Jetstack to help them bootstrap and start off on the right footing, avoid costly mistakes, and support them all the way through to production service with tried-and-tested architecture, patterns and practices.
How does Jetstack help companies simplify cloud-native projects?
As we see it, choosing to use Kubernetes as an application platform, and using it for proof-of-concepts, is just the beginning. Since the early days of the Kubernetes project, we’ve been helping organisations to take this adoption to the next level and use Kubernetes confidently in production across their businesses. It’s all about empowering development teams with the tools they need to move quickly and safely, and we have the blueprints for how to do it. That means partnering closely with teams, providing training for developers and operators to understand these new patterns and practices, and working alongside them to build the systems that work for them with our expertise and experience.
We also feed much of our real-world experience into open source innovations so the wider community can benefit. We’ve been working on the cert-manager project for a couple of years now; it extends Kubernetes and OpenShift to automate issuance and renewal of machine identities: the cryptographic keys and digital certificates that underpin machine-to-machine communications. For our customers, this is one less complex infrastructure component to manage, and it’s been transformative for organisations that were previously used to the rigmarole of manual operations that often involved use of emails and spreadsheets to get a certificate! As our new colleagues at Venafi would attest, more automation means an improved security posture and a reduced chance of costly outages.
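With cert-manager, requesting a certificate becomes a declarative Kubernetes resource rather than an email thread. As an illustrative sketch (the domain and issuer name here are hypothetical, and the issuer is assumed to be configured separately):

```yaml
# Ask cert-manager to obtain and auto-renew a TLS certificate,
# storing the resulting key pair in a Kubernetes Secret.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
spec:
  secretName: example-tls        # Secret where the key and cert are stored
  dnsNames:
    - example.com                # placeholder domain
  issuerRef:
    name: letsencrypt-prod       # assumed pre-configured ClusterIssuer
    kind: ClusterIssuer
```

cert-manager then handles issuance and renewal automatically, well before expiry, with no spreadsheets involved.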
What is the state of Kubernetes maturity in the UK?
Adoption of Kubernetes has increased significantly in the past 12 months in the UK and we now see many companies that are using it at scale. In the early days, it was the tech startups that led the charge, but we now see broad interest from across the market – from media companies to banks, telco to pharmaceuticals. In fact, there are many leading UK web properties that many of us use day-to-day that are backed by Kubernetes, some of which we help to support around-the-clock.
What Kubernetes questions are UK companies asking Jetstack that they weren’t previously?
Having seen success with Kubernetes and how it’s enabled developers to innovate and bring products online more rapidly, companies are now looking to roll it out further. We now see companies moving to the next stage of adoption, wishing to make it available to many more project teams across multiple business divisions. In some cases, this involves a complete replatforming across the board, which means lots of clusters – often in the tens and hundreds – that may span failure domains and cloud providers.
The challenge is then how to manage clusters at this scale, how to secure and interconnect them, and put in place the requisite controls and visibility. Developers want the freedom of the Kubernetes API and all it enables, but protections are required in enterprise, especially in regulated environments. Importantly, these protections need to be in place without slowing developers down.
As the richness and the complexity of the applications coming to Kubernetes also increases, so do the requirements for these platforms. Developers now need the ability to use stateful services as rapidly as they can deploy to Kubernetes, so we’re seeing databases, message queues, stream processing and more come to Kubernetes in a native way. Customers are asking how to do this and what to use, and how it should be managed securely at scale for many tenants across the business.
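Kubernetes supports stateful workloads natively through the StatefulSet resource, which gives each replica a stable identity and its own persistent volume. A minimal sketch, with a placeholder database image and an assumed headless Service named `db`:

```yaml
# A StatefulSet for an illustrative three-node database. Each replica
# gets a stable name (db-0, db-1, db-2) and its own persistent volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service, assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: example.com/db:1.0   # placeholder database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:          # stable, per-replica persistent storage
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```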
What is needed to take Kubernetes-driven cloud-native environments to the next level?
Pretty much out-of-the-box, Kubernetes gives us plenty of capability – scheduling of containers to large numbers of VMs, monitoring and health checks of those workloads, rolling upgrades, load balancing – the list goes on. To get it to that next level, it’s the tooling that’s built on the foundations of Kubernetes that helps drive even further automation. With much less infrastructure toil, developers can focus on software development and building compelling products.
There’s now lots of tooling available across the ecosystem, much of it in the CNCF, to further enhance platform capability. For instance, GitOps for automated application delivery, and service meshes such as Istio and Linkerd for traffic management, security and observability.
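As one example of mesh-driven traffic management, an Istio VirtualService can split traffic between two versions of a service for a canary rollout. This is a hypothetical sketch: the service name is a placeholder, and the `v1`/`v2` subsets are assumed to be defined in a separate DestinationRule:

```yaml
# Send 90% of traffic to v1 and 10% to v2 of a hypothetical
# in-mesh service, enabling a gradual canary rollout.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-app
spec:
  hosts:
    - web-app                  # hypothetical in-mesh service name
  http:
    - route:
        - destination:
            host: web-app
            subset: v1         # subsets assumed defined in a DestinationRule
          weight: 90
        - destination:
            host: web-app
            subset: v2
          weight: 10
```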
In cert-manager, we build on Kubernetes/OpenShift, making it seamless for developers to safely use and consume certificates to secure endpoints, whilst giving platform and security teams the control and visibility they require.
What are your Kubernetes predictions for 2020?
As businesses continue to realise the benefits of cloud native, we’ll see much more adoption across the organisation; clusters will proliferate across clouds, but also into on-premises and bare metal environments, as well as edge locations, such as in retail and telco. Kubernetes will be pervasive, establishing itself as the de facto API for application deployment for many enterprises in 2020.
I think we’ll continue to see more flavours of workload come to Kubernetes, including stateful systems, AI/ML for data science, as well as HPC. Interest in serverless will also keep picking up; event-based, scale-to-zero compute is often a good fit for many modern cloud native workloads. The likes of Knative bring the best of these serverless patterns to Kubernetes, but also mean that platform developers can tap into the powerful Kubernetes API and all its primitives if they wish to build something outside these abstractions.
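To give a flavour of the serverless pattern on Kubernetes, a Knative Service collapses Deployment, autoscaling and routing into a single resource, and by default scales its revision down to zero when no requests arrive. The name and image below are hypothetical:

```yaml
# A Knative Service: Knative handles routing and autoscaling,
# scaling down to zero instances when the service is idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-handler          # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: example.com/event-handler:1.0   # placeholder image
          ports:
            - containerPort: 8080
```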