Unpacked: NIST’s prototype for securing container apps in shared cloud environments
Mon 22 Feb 2021 | David Bisson
David Bisson explains why trusted compute pools are key to understanding NIST’s prototype for addressing a major IaaS security challenge
Organisations are increasingly shifting their resources to the public cloud. In November 2019, Gartner predicted that the worldwide public cloud services market would grow by 17% and reach $266.4 billion in 2020. The global research and advisory firm went on to forecast that Infrastructure as a Service (IaaS) in particular would increase 24% year over year to reach $50 billion in 2020, thereby demonstrating the highest growth rate across all market segments.
So, why are so many organisations moving to the public cloud?
It seems that many are looking to enhance their security and data protection capabilities. Indeed, more than half (58%) of IT leaders and executives told Deloitte in a survey that security ranked at the top of their concerns for moving to the cloud. The British multinational professional services network provided an explanation for why this might be the case:
- “Cybersecurity attacks are rising in sophistication, and a shortage of skills means that many companies are struggling to manage security in-house. Some IT executives are turning to third-party cloud and managed security services, with cloud providers delivering sophisticated cyber capabilities and solutions and cloud offering the potential of helping to mitigate security incidents. These factors suggest that IT executives may be increasingly relying on the expertise of third-party cloud-based security and infrastructure providers to protect their data”
There’s just one problem: in moving to IaaS, organisations are taking on additional security risks.
For instance, there’s the issue of having servers located in different places. Lastline notes that having cloud servers hosted in more than one data centre could require organisations to implement additional firewalls and routing rules to handle their network traffic.
This creates complexity, the enemy of security. That’s especially the case if some of those data centres are located in other countries, as organisations then need to make sure that they’re following data protection regulations and/or compliance protocols. If their IaaS provider is non-compliant, organisations will then need to invest additional resources towards upholding those standards.
Additionally, many organisations need to ensure that their cloud-based apps don’t interfere with one another. They might want to keep some of their apps separate because those apps have different security requirements, and a violation could expose sensitive data. They might also share cloud server space with a rival company and want to keep their assets apart.
These challenges raise the following question: how can organisations secure their application container deployments in multi-tenant cloud environments such as those described above?
A Prototype for Container Platform Security
The National Institute of Standards and Technology (NIST) recognises this need. That explains why it’s working on a draft for Internal Report 8320A, “Hardware-Enabled Security: Container Platform Security Prototype.” In its report, NIST proposes a prototype that addresses these issues across three related stages.
Stage 0: Platform Attestation and Measured Worker Node Launch
In the beginning, organisations need to be able to trust the platform on which the container deployment is running. They can do this by creating what are known as trusted compute pools: groups of hardware in a data centre to which certain security policies apply. Organisations use attestation to deem a launched platform a trusted node. They can then add that node to the trusted compute pool and manage the execution of apps and workloads on it.
NIST’s prototype specifically calls for the use of a cloud management server. This resource facilitates access to the measurements of a platform’s BIOS and OS components, which are stored within a server hardware security module after the platform has undergone a measured launch.
Additionally, it remotely monitors servers’ BIOS and OS measurements against a defined baseline and issues a notification if it detects configuration drift. At that point, administrators can use the cloud management server to take remediation actions, which could include powering down the affected server and/or updating the firmware.
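The drift check works by comparing freshly taken measurements against a known-good baseline. The sketch below is a hedged illustration, not NIST’s implementation: it stands in SHA-256 hashes of byte strings for the TPM-style measurements, and the component names and images are invented for the example.

```python
import hashlib

def measure(component_bytes: bytes) -> str:
    """Stand-in for a TPM-style measurement: hash a component image."""
    return hashlib.sha256(component_bytes).hexdigest()

# Hypothetical known-good baseline, recorded when the server was provisioned.
baseline = {
    "bios": measure(b"bios-image-v1"),
    "os-kernel": measure(b"kernel-image-v1"),
}

def detect_drift(current: dict, baseline: dict) -> list:
    """Return the components whose measurements no longer match the baseline."""
    return [name for name, expected in baseline.items()
            if current.get(name) != expected]

# A server whose kernel has changed since provisioning:
current = {
    "bios": measure(b"bios-image-v1"),
    "os-kernel": measure(b"kernel-image-v2"),  # drifted
}

print(detect_drift(current, baseline))  # → ['os-kernel']
```

In the prototype, detecting such a mismatch is what triggers the notification and remediation steps described above.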
This stage of the prototype also entails assigning a secure asset tag to every server during the provisioning process. More on this in Stage 2.
Stage 1: Trusted Workload Placement
This stage is all about ensuring that workloads launch only on trusted platforms. It uses the same architecture as Stage 0 and, like that stage, begins with a server performing a measured launch of a platform and the enhanced hardware-based security features storing the BIOS and OS measurements in a server hardware module.
It’s then that the server (functioning as a worker node) sends a quote containing signed hashes of those component measurements to the Trust Authority. That entity verifies the quote and passes along its attestation to the cloud management server. Using that quote and other user requirements, the management server enforces workload policy requirements on a server functioning as a control node within the same cloud as the original server. That control node then launches workloads requiring trusted infrastructure on trusted server platforms only, with each server platform undergoing a periodic audit of its measurement values.
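As a rough illustration of this quote–verify–place flow, the sketch below substitutes an HMAC over a shared demo key for the hardware-rooted asymmetric signature a real Trust Authority would check; the function names (`make_quote`, `verify_quote`, `place_workload`) are invented for this example.

```python
import hashlib
import hmac

# Hypothetical shared key; a real Trust Authority verifies an asymmetric
# signature rooted in the server's hardware security module instead.
ATTESTATION_KEY = b"demo-attestation-key"

def _digest(measurements: dict) -> str:
    """Deterministic hash over the measurement set."""
    return hashlib.sha256(repr(sorted(measurements.items())).encode()).hexdigest()

def make_quote(measurements: dict) -> dict:
    """Worker node: bundle measurement hashes and sign the bundle."""
    digest = _digest(measurements)
    sig = hmac.new(ATTESTATION_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": sig, "measurements": measurements}

def verify_quote(quote: dict) -> bool:
    """Trust Authority: recompute the digest and check the signature."""
    digest = _digest(quote["measurements"])
    expected = hmac.new(ATTESTATION_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == quote["digest"] and hmac.compare_digest(expected, quote["signature"])

def place_workload(quote: dict, requires_trusted: bool) -> str:
    """Control node: schedule trusted workloads only on attested platforms."""
    if requires_trusted and not verify_quote(quote):
        return "rejected"
    return "scheduled"
```

The key design point mirrored here is that the scheduler never trusts a platform’s self-reported state: placement of a trust-requiring workload depends on a verification step that any tampering with the measurements will fail.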
Stage 2: Asset Tagging and Trusted Location
The final stage builds on the previous two stages by empowering administrators to manage asset tag restrictions. Specifically, it calls upon administrators to periodically audit the asset tag of the cloud server platform against the organisation’s asset tag policy restrictions. This process helps to ensure that the server’s asset tagging complies with the organisation’s security policies.
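A minimal sketch of such an audit, assuming an invented tag schema in which each server’s asset tag names a location and policy permits only certain locations (for example, to satisfy EU data-residency rules):

```python
# Hypothetical policy: workloads under EU data-residency rules may run
# only on servers whose asset tag names an approved location.
ALLOWED_LOCATIONS = {"eu-west", "eu-central"}

def audit_asset_tag(server: dict) -> bool:
    """Check one server's asset tag against the organisation's policy."""
    return server.get("asset_tag", {}).get("location") in ALLOWED_LOCATIONS

servers = [
    {"name": "node-1", "asset_tag": {"location": "eu-west"}},
    {"name": "node-2", "asset_tag": {"location": "us-east"}},
]

# Periodic audit: collect the servers that violate the tag policy.
non_compliant = [s["name"] for s in servers if not audit_asset_tag(s)]
print(non_compliant)  # → ['node-2']
```

Run periodically, a check of this shape gives administrators the audit trail the stage calls for: any server whose tag falls out of policy is surfaced for remediation rather than silently continuing to host restricted workloads.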
How to Implement the Prototype
NIST provides several means by which organisations can set up the prototype. One of the ways they can do this is via Kubernetes. Along the way, however, they need to make sure that their Kubernetes configurations support their containers’ overall security. StackRox explains why that’s especially true as organisations’ environments continue to grow:
- “That’s because in a sprawling Kubernetes environment with several clusters spanning tens, hundreds, or even thousands of nodes, created by hundreds of different developers, manually checking the configurations is not feasible. And like all humans, developers can make mistakes – especially given that Kubernetes configuration options are complicated, security features are not enabled by default, and most of the community is learning how to effectively use components including Pod Security Policies and Security Context, Network Policies, RBAC, the API server, kubelet, and other Kubernetes controls.”
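Configuration checks like the ones StackRox describes are typically automated rather than done by hand. As a hedged sketch, the snippet below operates on a simplified dict rather than the real Kubernetes API, though the `securityContext` field names it inspects (`privileged`, `runAsNonRoot`, `allowPrivilegeEscalation`) are genuine pod-spec fields:

```python
def check_pod_security(pod_spec: dict) -> list:
    """Flag common insecure settings in a (simplified) pod spec."""
    findings = []
    for container in pod_spec.get("containers", []):
        ctx = container.get("securityContext", {})
        if ctx.get("privileged"):
            findings.append(f"{container['name']}: privileged container")
        if not ctx.get("runAsNonRoot"):
            findings.append(f"{container['name']}: may run as root")
        # Kubernetes defaults allowPrivilegeEscalation to true unless disabled.
        if ctx.get("allowPrivilegeEscalation", True):
            findings.append(f"{container['name']}: privilege escalation allowed")
    return findings

pod = {"containers": [{"name": "web",
                       "securityContext": {"privileged": True}}]}
for finding in check_pod_security(pod):
    print(finding)
```

The point of the sketch is the quote’s own: insecure defaults go unnoticed at scale unless every spec is checked mechanically, which is exactly what admission controls and policy engines do in production clusters.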
To learn more about Kubernetes security configurations, check out the project’s official security documentation.