
Do you have proper Kubernetes security policies in place?

Wed 9 Jan 2019 | Brian Johnson

On the back of last year’s high-profile discovery of a privilege escalation flaw in Kubernetes, CIOs and CSOs are in need of solid security strategies to protect applications under the helm of Captain Kubernetes. Brian Johnson, CEO of DivvyCloud, explores the complex task of managing massive, distributed systems built on open source technologies

Because Kubernetes solves a few key problems in the cloud, its adoption has skyrocketed. Containers provide a lightweight, consistent compute footprint across test/dev and production environments, but bundling a set of containers for each application component is a time-consuming byproduct. Kubernetes simplifies this with manifests.
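As a sketch of what that bundling looks like, a minimal Deployment manifest (the names and image are illustrative, not from any real system) declares the containers that make up one application component, and Kubernetes keeps the declared state running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # illustrative name
spec:
  replicas: 3                   # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: app
        image: registry.example.com/web-frontend:1.4.2   # hypothetical image
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` replaces the manual work of starting and supervising each container for the component.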

Containers also help maximise the utilisation of the underlying infrastructure, such as CPU and memory. But then the problem becomes connecting Kubernetes to IaaS layers to enable the proper scaling of the infrastructure. Kubernetes provides container-native tools for scaling automatically across an infrastructure footprint.
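One of those container-native tools is the Horizontal Pod Autoscaler. As a hedged example (resource names are illustrative, and this assumes a cluster serving the `autoscaling/v2` API; older clusters expose `v2beta2`), a minimal HPA scales a deployment based on CPU utilisation:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend          # hypothetical target deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # add pods when average CPU exceeds 70%
```

The cluster autoscaler can then grow or shrink the underlying IaaS footprint to fit the scheduled pods.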

Lastly, there’s strength in numbers. Several other projects have tried to address those points for containers, but once a project with a strong track record and corporate buy-in, such as Google-originated Kubernetes, hits the market, it gains momentum. In the open source ecosystem, momentum often begets more momentum, and that’s where all the necessary safeguards and controls get built out to make something ready for broader mass adoption.

The task of managing massive, distributed systems built on open source technologies

The complexity of managing large-scale open source systems varies in direct relationship with the following characteristics:

    • How many discrete applications are deployed that leverage open source?
    • How many open source technologies are in use within those applications? Recent surveys show that most applications built on open source use around 100 open source components on average. 
    • How many of those technologies do not have pre-built integrations for configuration and management?
    • How many are supported commercially?
    • How many are established, as opposed to the number that are groundbreaking or in the earliest adoption phase?

When you look at a multi-tier, distributed application built on multiple open source applications, the level of complexity can become overwhelming for teams that don’t have good, established orchestration processes for both deployment and automated upkeep over time.

“In the open source ecosystem, momentum often begets more momentum”

Taking charge of Captain Kubernetes

Most firms that have been building and running applications on open source for more than a few years do understand the complexities and challenges associated with this task, within limits.

Operating systems (Linux), web servers (Apache, Tomcat, etc), databases (MySQL, Postgres) and programming languages (Python, Java) are well established, and certain common reusable libraries have evolved alongside them. All of those technologies are known, and risks have been measured and charted along common axes, such as common vulnerabilities and exposures (CVEs) and exposed access points.

However, building up that organisational expertise can be time-consuming and often requires hiring or tasking existing staff with this responsibility.

Security risks in dynamic environments

There are generally a few categories of security risks in these environments.

First and foremost, keeping packages up to date and having a global inventory is crucial. This package inventory can help you to quickly know whether CVEs are a potential risk to your organisation.
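A global inventory can start from data the cluster already exposes. As a hedged sketch (the pod data here is hard-coded in place of live output from `kubectl get pods -A -o json`, and the image names are illustrative), collecting the distinct image:tag pairs in use gives a starting point for matching against CVE advisories:

```python
import json
from collections import Counter

def image_inventory(pods_json: str) -> Counter:
    """Count distinct container images across a list of pods.

    pods_json mirrors the shape of `kubectl get pods -A -o json`.
    """
    pods = json.loads(pods_json)["items"]
    images = Counter()
    for pod in pods:
        for container in pod["spec"]["containers"]:
            images[container["image"]] += 1
    return images

# Hard-coded sample standing in for live cluster output.
sample = json.dumps({
    "items": [
        {"spec": {"containers": [{"image": "nginx:1.14.0"}]}},
        {"spec": {"containers": [{"image": "nginx:1.14.0"},
                                 {"image": "redis:4.0.9"}]}},
    ]
})

inventory = image_inventory(sample)
print(inventory)
```

Each image:tag pair in the resulting inventory can then be checked against published CVEs for that version.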

Secondly, since most of these dynamic environments are built on cloud technologies like Amazon Web Services (AWS) or Azure, or even on container-native platforms like Google Kubernetes Engine (GKE) that consist of fully virtualised, software-defined environments, the software definition becomes a critical security risk. According to the data breach database on Breach Level Index, at least 25% of breaches in 2018 occurred as a direct result of poor configuration management in a cloud environment.

Third, with the high rate of change, there is an additional potential security risk introduced with each change – namely, is the change properly coordinated across application components? Did the change introduce a potentially vulnerable point?

“Building up organisational expertise can be time-consuming and often requires hiring or tasking existing staff with this responsibility”

Finally, as more and more of these tools become oriented around web services and standards such as RESTful APIs, the communication paths (inbound/outbound ports) and access controls (authentication, data access levels) in API calls also become a significant security risk to consider.

The only acceptable goal for any organisation is full security and mitigation of every known security risk point. To get there, enterprises need to know what they have, how to build security into the design of each piece of the stack, and how to monitor for security risk on an ongoing basis.

Looking forward

Kubernetes is likely to go through both a maturation process and a shakeout. Early-adopter enterprises will most likely “battle test” and vet 5-10 “mainstream” Kubernetes frameworks. They’ll figure out how to apply security and compliance controls, as well as governance standards, either as part of a Kubernetes fork, or as a combination of a Kubernetes project and some add-on tools and documentation.

Red Hat’s OpenShift is an early example of the approach where 2-3 of the major projects will be sponsored by key players in the open source ecosystem. But there will continue to be some experimentation by SMBs in key innovation spaces, such as AI/ML and IoT, creating projects tailored for their use cases.

Our message to the community of enterprise Kubernetes adopters is the same as our message to the community of enterprises adopting cloud at a large scale. First, focus on gaining real-time, continuous visibility. You cannot protect it if you cannot see it.

Second, once you have visibility, focus on a policy-driven approach, where you can modify policies as needed. For instance, organisations can define policies around network access, encryption settings, whitelists or blacklists of components (open source libraries, virtual machine templates) or access policies (identity and access management, or IAM).
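As an illustrative sketch (the resource fields, registry name and policy rules here are invented for the example, not a real policy engine), a policy-driven approach can be as simple as a list of named predicates evaluated against each resource, so rules can be added or changed without touching the scanning code:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    image: str
    encrypted: bool
    open_ports: list = field(default_factory=list)

# Each policy is a (description, predicate) pair; editing this list
# changes enforcement without touching the evaluation loop below.
POLICIES = [
    ("storage must be encrypted", lambda r: r.encrypted),
    ("no public SSH", lambda r: 22 not in r.open_ports),
    ("image must come from the approved registry",
     lambda r: r.image.startswith("registry.example.com/")),  # hypothetical registry
]

def violations(resource: Resource) -> list:
    """Return the descriptions of every policy this resource fails."""
    return [desc for desc, check in POLICIES if not check(resource)]

risky = Resource("legacy-db", "dockerhub.io/mysql:5.5",
                 encrypted=False, open_ports=[22, 3306])
print(violations(risky))
```

Because the rules are data rather than code, a new blacklist entry or encryption requirement is a one-line change, which is what makes the real-time policy adjustments described next practical.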

Third, ensure that you can adjust policies in real time. When a new vulnerability is identified, how can you make a policy change to quickly identify resources or applications at risk, and then mitigate the risks?

Finally, work towards a vision of real-time response to problems. Reporting alone will not keep the enterprise secure; it is a task unto itself, not a desired end-state. Real-time automated remediation is the key to achieving a continuously secure environment. That is the desired end-state.

Experts featured:

Brian Johnson

Cofounder & CEO
DivvyCloud
