Cloud Native Buildpacks Security Assessment
Security reviewers: Andres Vega, Adith Sudhakar, Cole Kennedy, Daniel Papandrea, Daniel Tobin, Magno Logan, Matthew Giasa, Matt Jarvis.
Buildpacks team: Stephen Levine, Sambhav Kothari.
This document details the design goals and security implications of Cloud Native Buildpacks to aid in the security assessment by the CNCF Security Technical Advisory Group.
Metadata
Item | Details |
---|---|
Software | All available code: https://github.com/buildpacks Core implementation: https://github.com/buildpacks/lifecycle Specification: https://github.com/buildpacks/spec |
Security Provider | No. A primary function of the Cloud Native Buildpacks tooling is to build container images that are secure, compliant, and up-to-date. However, security is one of many goals for the project. |
SBOM | Until an SPDX SBOM is available, see: https://github.com/buildpacks/pack/blob/main/go.mod https://github.com/buildpacks/lifecycle/blob/main/go.mod https://github.com/buildpacks/libcnb/blob/main/go.mod https://github.com/buildpacks/imgutil/blob/main/go.mod https://github.com/buildpacks/registry-api/blob/main/Gemfile |
Security links
Doc | URL |
---|---|
Buildpack API Security Considerations | https://github.com/buildpacks/spec/blob/main/buildpack.md#security-considerations |
Platform API Security Considerations | https://github.com/buildpacks/spec/blob/master/platform.md#security-considerations |
Default and optional configs | The default guidance for platform maintainers, including security trade-offs, is covered by the specification: https://github.com/buildpacks/spec |
Architectural Diagrams | https://docs.google.com/presentation/d/1G6slFtpHPjIx-JHzRXAjJcWnQx6a5mRgW6QO_3XqeQo/edit#slide=id.g6e655be570_3_2356 |
Overview
The Cloud Native Buildpacks project provides tooling to transform source code into container images using modular, reusable build functions called buildpacks. To accomplish this, the project takes advantage of advanced features in the OCI image standard that are underutilized by the Dockerfile model.
Background
The original buildpack concept was conceived by Heroku in 2011. The Cloud Native Buildpacks project was initiated by Pivotal (now VMware) and Heroku in January 2018 and joined the Cloud Native Sandbox in October 2018. The project aims to unify the buildpack ecosystems with a platform-to-buildpack contract that is well-defined and that incorporates learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku. Cloud Native Buildpacks brings the benefits of a managed dependency stack to Container Native standards (like OCI images and Docker registries), while taking advantage of cutting-edge features like content-addressable image layers and cross-repo blob mounting to achieve scalability and security outcomes that are difficult to achieve with Dockerfiles.
Goal
The goal of the Cloud Native Buildpacks (CNB) project is to transform source code into container images with a focus on developer productivity, container security, and day-2 operations involving container images at scale.
The security guarantees imparted to users are:
Container images generated by Cloud Native Buildpacks tooling meet a minimum standard of container security, for example:
- All processes must use a non-root UID/GID (see the verification sketch after these guarantees)
- Build-time and runtime base images are always specified separately, so that build-time dependencies such as compilers are not included in the image
- Build-time and runtime environment variables are always specified separately, so that sensitive build-time configuration is not included in the image
Container images generated by Cloud Native Buildpacks tooling must be bit-for-bit reproducible when the build tooling provided by the buildpacks supports reproducibility.
Container images generated by Cloud Native Buildpacks tooling must be “rebasable,” so that ABI-compatible OS packages with critical security patches may be upgraded without rebuilding application-level or runtime-level layers.
Container images generated by Cloud Native Buildpacks tooling must contain metadata about their dependencies for auditing purposes. (This is reliant on buildpack implementations to a degree, but metadata is mandatory to use certain API features.)
The container build process must be usable with untrusted application code and buildpack code inside a controlled infrastructure environment. This implies, but is not limited to, the following:
- All containers running CNB tooling must run without any capabilities or privileges.
- All containers that may execute buildpack or app code must additionally run as a non-root user.
- Infrastructure credentials, such as VCS and registry credentials, must not be present in containers that execute buildpack or app code.
- CNB tooling must allow buildpacks to generate images without egress network traffic, i.e., buildpacks must be allowed to bundle language-specific runtimes and other dependencies so that egress traffic is unnecessary.
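As a concrete illustration of the non-root guarantee above, here is a minimal sketch, assuming the go-containerregistry library and a hypothetical image reference, of how a platform might verify that an exported image does not run as root by inspecting its OCI config. It is an illustration, not project tooling:

```go
// Verify the non-root guarantee by reading the User field of an
// exported image's OCI config. The image reference is hypothetical.
package main

import (
	"fmt"
	"strings"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	img, err := crane.Pull("registry.example.com/app:latest") // hypothetical image
	if err != nil {
		panic(err)
	}
	cf, err := img.ConfigFile()
	if err != nil {
		panic(err)
	}

	// The OCI config's User field holds the runtime UID[:GID].
	user := cf.Config.User
	uid := strings.Split(user, ":")[0]
	if uid == "" || uid == "0" || uid == "root" {
		fmt.Println("FAIL: image runs as root:", user)
		return
	}
	fmt.Println("OK: image runs as non-root user:", user)
}
```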
Non-goals
The Cloud Native Buildpacks project does not provide language-specific buildpacks. Instead, individual communities and vendors provide buildpacks, which they may publish to the Buildpack Registry. Examples include Paketo Buildpacks, Heroku Buildpacks, and Google Cloud Buildpacks. The goal of the Cloud Native Buildpacks project is to provide only the specification and tooling that make it easy for platforms to build images using buildpacks.
Intended Use
Cloud Native Buildpacks may be used to build container images in any cloud environment that supports running OCI or Docker v2 images and has access to a Docker v2 registry. Platform Maintainers may use the Cloud Native Buildpacks tooling to achieve this outcome.
Cloud Native Buildpacks may be used to run buildpack builder images on source code to create application container images. Buildpack Maintainers may use the Cloud Native Buildpacks tooling to develop and deploy their buildpack builder images. Application Developers may use the buildpack builder images with Cloud Native Buildpacks tooling to create application container images.
The Cloud Native Buildpacks lifecycle, which allows platforms to implement the Buildpack API, executes in a series of containers. Platforms are encouraged to execute these containers without any host privileges or capabilities, and to confine infrastructure access (i.e., access to Docker registries or VCS) to containers that do not execute buildpack or application code. Additionally, containers that execute buildpack or app code are always run with a non-zero in-container UID and GID. The lifecycle is designed to facilitate this mode of operation by default. See the “Project Design” section below for a more complete description of these operational aspects.
Project Design
Additional architectural diagrams are available in this presentation.
An ideal example of how a platform might implement an application build using the lifecycle is as follows (a container-level sketch follows this list):
- Detection runs in a user-provided builder image using the non-zero UID and GID specified in the builder image labels. A structured list of buildpacks is produced in a TOML file that is left in a dedicated volume.
- Analysis runs in a platform-provided image using in-container UID zero and GID zero to restore layer metadata into a dedicated volume. Registry credentials are provided for this stage, but no user code is executed.
- Restore runs in a platform-provided image using in-container UID zero and GID zero to restore layers into a dedicated volume. Registry credentials are provided for this stage, but no user code is executed.
- Build runs in a user-provided builder image using the non-zero UID and GID specified in the builder image labels. A structured list of buildpacks is provided, and layer directories are produced in a volume owned by the aforementioned UID and GID. After buildpacks are executed, file and directory timestamps are set to a fixed, non-zero time in 1980 (for OS compatibility) so that the image is reproducible.
- Export runs in a platform-provided image using in-container UID zero and GID zero to transfer newly-generated layers from the dedicated volume to the Docker registry. Registry credentials are provided for this stage, but no user code is executed. The image creation date is set to a fixed, non-zero time in 1980 (for OS compatibility) so that it is reproducible.
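Below is a minimal sketch of the phase isolation described above, assuming a recent Docker Engine Go SDK. The image names, volume names, and credential value are hypothetical; error handling, waiting on containers, and the analyze/restore/build phases (which follow the same pattern) are omitted for brevity:

```go
// Run lifecycle phases as separate containers: non-root and
// credential-free when untrusted code executes, credentialed and
// code-free otherwise. All phases run unprivileged with no capabilities.
package main

import (
	"context"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func runPhase(ctx context.Context, cli *client.Client, image, user string, cmd, env []string) error {
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{
			Image: image,
			User:  user, // non-zero UID:GID whenever buildpack or app code runs
			Cmd:   cmd,
			Env:   env, // credentials only in phases that run no untrusted code
		},
		&container.HostConfig{
			Privileged: false,           // never privileged
			CapDrop:    []string{"ALL"}, // drop all capabilities
			Binds: []string{
				"cnb-layers:/layers", // hypothetical dedicated volumes
				"cnb-cache:/cache",
			},
		},
		nil, nil, "")
	if err != nil {
		return err
	}
	return cli.ContainerStart(ctx, resp.ID, container.StartOptions{})
}

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		panic(err)
	}

	// Detect: user-provided builder image, non-root, no credentials.
	_ = runPhase(ctx, cli, "example/builder", "1000:1000",
		[]string{"/cnb/lifecycle/detector"}, nil)

	// Export: platform-provided image, UID/GID zero inside the container,
	// registry credentials present, but no buildpack or app code executes.
	_ = runPhase(ctx, cli, "example/lifecycle", "0:0",
		[]string{"/cnb/lifecycle/exporter", "registry.example.com/app"},
		[]string{`CNB_REGISTRY_AUTH={"registry.example.com":"Basic ..."}`})
}
```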
An ideal example of how a platform might implement security patching at scale using the lifecycle is as follows (a conceptual sketch follows this list):
- A new base runtime image is uploaded to the Docker registry by a platform operator. This new base runtime image contains only ABI-compatible security patches to long-term-support (LTS) OS packages.
- Rebase executes inside or outside of a containerized environment to modify each application image manifest that uses an older copy of the base runtime image so that it points to the new base runtime.
- All modified images are deployed using their new image digests, which have changed due to the updated manifest pointer. During the deploy, only a single copy of the new base runtime image is transferred to each VM node. No other image layers are transferred during the deploy process.
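A conceptual sketch of rebase, assuming the go-containerregistry library; the CNB lifecycle ships its own rebaser, so the image references and flow here are illustrative rather than the project's implementation:

```go
// Rebase an application image onto a patched base runtime image by
// swapping the base layers under the app layers. The app layers are
// untouched, so no rebuild occurs and the registry reuses existing blobs.
package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
	"github.com/google/go-containerregistry/pkg/v1/mutate"
)

func main() {
	app, _ := crane.Pull("registry.example.com/app:latest")     // app image
	oldRun, _ := crane.Pull("registry.example.com/run:old")     // current base runtime
	newRun, _ := crane.Pull("registry.example.com/run:patched") // patched base runtime

	rebased, err := mutate.Rebase(app, oldRun, newRun)
	if err != nil {
		panic(err)
	}

	// Push under the same tag; deploys then pick up the new digest,
	// which changed only because the manifest pointer changed.
	if err := crane.Push(rebased, "registry.example.com/app:latest"); err != nil {
		panic(err)
	}

	digest, _ := rebased.Digest()
	fmt.Println("new digest:", digest)
}
```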
Configuration and Set-Up
The default guidance for platform maintainers, including security trade-offs, is covered by the specification: https://github.com/buildpacks/spec
User-facing documentation for buildpack maintainers and application developers is available here: https://buildpacks.io/docs/
The current default recommendations for lifecycle configuration by platforms assume that all buildpack and application code is untrusted. These recommendations sacrifice performance to achieve complete isolation of credentials and privileges from untrusted code. This RFC paves the way to provide more flexibility for single-user platforms.
More detailed security considerations are addressed in the Buildpack API and Platform API specifications (see the “Security links” table above).
Project Compliance
While the Cloud Native Buildpacks tooling itself is not documented to meet specific security standards, it facilitates implementing those standards in ways that are unique to CNB as a container build solution.
For example:
- CNB tooling allows platform maintainers to enforce the use of a certified base image (e.g., meeting DISA STIG) that can be patched for many applications, including pre-built applications, without changing application behavior.
- CNB tooling allows buildpack maintainers to provide metadata that conforms to security standards. For example, dependencies installed in the image may be described in image metadata using NVD identifiers (see the sketch below).
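A rough sketch, assuming the go-containerregistry library, of how an auditor might read dependency metadata from the io.buildpacks.build.metadata label on an exported image; the field names in the struct are a simplified assumption about the label schema:

```go
// List dependency (BOM) entries recorded by the lifecycle in an exported
// image's labels, for auditing purposes. The image reference is
// hypothetical and the BOM fields below are a simplified assumption.
package main

import (
	"encoding/json"
	"fmt"

	"github.com/google/go-containerregistry/pkg/crane"
)

type bomEntry struct {
	Name     string                 `json:"name"`
	Metadata map[string]interface{} `json:"metadata"`
}

func main() {
	img, err := crane.Pull("registry.example.com/app:latest")
	if err != nil {
		panic(err)
	}
	cf, err := img.ConfigFile()
	if err != nil {
		panic(err)
	}
	raw := cf.Config.Labels["io.buildpacks.build.metadata"]

	var md struct {
		BOM []bomEntry `json:"bom"`
	}
	if err := json.Unmarshal([]byte(raw), &md); err != nil {
		panic(err)
	}
	for _, dep := range md.BOM {
		fmt.Printf("%s %v\n", dep.Name, dep.Metadata["version"])
	}
}
```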
Security Analysis
Attacker Motivations
An attacker might interact with CNB tooling or specification when motivated by these direct outcomes:
Supply-chain compromise
- Compromising the CNB project tooling or third-party buildpacks
- Providing malicious buildpacks, builders, or build extensions
System intrusion via build infrastructure
- Taking advantage of flaws in CNB tooling, such as inadequate isolation of untrusted code, to compromise build infrastructure.
- Taking advantage of indirect platform misconfigurations, such as lax egress policies, to access internal resources
System intrusion via built applications
- Taking advantage of flaws in running application images that are due to deficiencies in the way the CNB tooling or buildpacks built them.
These direct outcomes may be used to achieve second-order outcomes such as data exfiltration, denial of service, etc.
Predisposing Conditions
- Platform maintainers may fail to isolate untrusted application and/or buildpack code from registry credentials by running certain build stages (such as detect or build) with credentials present in the environment.
- Platform maintainers may use privileged containers, or containers with unnecessary capabilities, to run untrusted buildpack or application code. This could lead to inappropriate changes to host configuration or container breakout when combined with additional vulnerabilities.
- Platform maintainers may use build-time containers with lax egress networking policies that allow access to internal subnets to run untrusted buildpack or app code. This could lead to compromise of internal systems.
- Buildpack maintainers may provide vulnerable dependencies to the application or misconfigure the application.
- A malicious actor may re-distribute safe, third-party buildpacks in a builder that contains a modified lifecycle. Application developers may not realize that certain non-buildpack components of the builder are exposed to registry credentials. This is actively being addressed.
Expected Attacker Capabilities
When unmodified CNB tooling is properly configured by a platform maintainer, we assume that an attacker may be able to compromise an application by providing malicious buildpacks, stacks, or application extensions, or by taking advantage of those components when they are vulnerable or improperly configured. However, we assume that an attacker is unable to obtain registry credentials to compromise other images.
When CNB tooling is improperly configured or the tooling itself is compromised, we assume that an attacker may be able to compromise any number of applications on the registry, build infrastructure that is exposed to untrusted code, and supply chains involving compromised images.
Attack Risks and Effects
Supply-chain attacks are extremely risky; many enterprises rate them as potential company-ending events. Not only could they lead to complete compromise of any data or infrastructure systems that application code has access to, but they could also lead to compromise of customer systems when compromised products are distributed to customers in the form of pre-built images.
Application vulnerabilities introduced by outdated buildpacks or stacks present a level of risk that is quantified by the CVSS scale for a given vulnerability.
Build system vulnerabilities could lead to risky supply-chain attacks, but they could also lead to less risky scenarios such as denial of service or improper use of resources.
Security Degradation
If an attacker is able to obtain registry credentials, then all applications on the registry may be compromised. However, build infrastructure would not necessarily be compromised unless it executes images on the registry with privileges.
If an attacker is able to provide malicious buildpacks or stack images, then all applications built using those artifacts may be compromised. However, build infrastructure would not necessarily be compromised unless it executes those images with privileges.
If an attacker is able to compromise build infrastructure (e.g., via a container escape executed by malicious buildpacks, stack images, or application code; or by compromising images that comprise the build infrastructure itself), then all of the above-mentioned degradations may apply.
Compensating Mechanisms
A properly configured CNB build executes with complete isolation of registry and VCS credentials from untrusted buildpack or application code. This means that platforms building untrusted applications with untrusted buildpacks should not be vulnerable to VCS or registry compromise.
Additionally, a CNB build may be executed in unprivileged containers with zero capabilities. This means that compromised buildpacks, stacks, application code, and/or CNB tooling cannot be used to compromise the host build system without a severe underlying kernel or hardware vulnerability.
Both buildpack code and application code execute with a non-zero UID and GID. This means that many OS-level files cannot be modified by untrusted code.
Secure Development Practices
Development Pipeline
- Automated testing is employed extensively throughout all code bases.
- Automated testing is enforced via CI systems (mostly GitHub Actions).
- All PRs require sub-team maintainer approval.
- All changes to the specification or RFCs for project-wide changes require supermajority approval of the core team.
- Repositories use gosec and CodeQL for static analysis.
- Repositories use Dependabot to keep dependencies up-to-date and secure.
- More information: https://github.com/buildpacks/community
Communication Channels
Team members use Slack (slack.buildpacks.io) and GitHub (github.com/buildpacks) for all internal and inbound asynchronous communication. Internal, inbound, and outbound communication happens synchronously at twice-weekly working group meetings, which are open. Delicate topics (such as code of conduct violations) are discussed in private, maintainer-only Slack channels. Some outbound communication happens over the CNCF CNB mailing list.
Ecosystem
Cloud Native Buildpacks tooling builds container images that can be deployed on all platforms that support container standards (OCI), including K8s. Additionally, Cloud Native Buildpack builds can be securely configured to execute on those platforms. As far as we know, CNB is the only vendor-neutral API for creating OCI images. CNB is a true, language-agnostic alternative to Dockerfiles.
Several vendors provide Cloud Native Buildpacks, including Paketo, Heroku, and Google Cloud (see “Non-goals” above).
Additionally, CNB is adopted by the following platforms/tools within the Cloud Native Ecosystem:
- kpack
- Tanzu Build Service - CNB via kpack
- Weave Firekube - CNB via kpack
- Azure Spring Cloud - CNB via kpack
- Project Riff - CNB via kpack
- Cloud Foundry on K8s - CNB via kpack
- Azure Container Registry - CNB
- Salesforce Evergreen - CNB
- Google Cloud Run Button - CNB
- Google kf - CNB
- Google Skaffold - CNB
- Spring Boot - CNB
- Heroku - CNB
- GitLab - CNB
- Dokku - CNB
- Deft - CNB
- Porter - CNB
Security Issue Resolution
Responsible Disclosures Process
- Vulnerability Response Process: the responsible responders, the reporting process, and the response procedure are documented here: https://github.com/buildpacks/.github/blob/master/SECURITY.md
Incident Response
Designated core team members decrypt and respond to reports within 24 hours. Vulnerabilities are patched and announced following responsible disclosure best practices.
Roadmap
Project Next Steps
Roadmaps for 2020 and 2021, and the current community roadmap:
- https://medium.com/buildpacks/cloud-native-buildpacks-2020-roadmap-b7e43876473a
- https://medium.com/buildpacks/cloud-native-buildpacks-2021-roadmap-e5ece588fc0
- https://github.com/buildpacks/community/blob/main/ROADMAP.md
As our platform options expand to include tools like BuildKit, it is important that we continue to provide strong isolation of credentials and privileged access from untrusted buildpack and application code. Additionally, as the project scope expands to include a hosted registry, it is important to consider the registry as an attack vector. Security-specific next steps:
- Always use project-provided image for analysis/export.
- Recommend separate build-time and run-time user IDs
- Registry is implemented as a git-based index (+ spec)
- Additional work on image reproducibility
CNCF Requests
We would welcome a third-party security review.
Appendix
- Known Issues Over Time
Security-related design issues come up occasionally and are addressed as they arise.
Automated testing is employed extensively.
Given that CNB is developer tooling, many common classes of security vulnerability (such as those applicable to a service) do not apply.
- CII Best Practices
Currently, Buildpacks meets the passing criteria of the Core Infrastructure Initiative (CII) best practices badging program.
- Case Studies
Numerous real-world commercial offerings employ CNB.
In general, these are commercial on-prem, private cloud, or SaaS offerings that provide end-user build functionality.
- Related Projects / Vendors
CNB compares closely to the following technologies:
- Jib - Similar to CNB in that it constructs OCI images directly without a Dockerfile or Docker. Jib is Java-specific, but uses the same techniques as CNB without requiring build containers.
- Ko - Similar to CNB in that it constructs OCI images directly without a Dockerfile or Docker. Ko is Go-specific, but uses the same techniques as CNB without requiring build containers.
The following technologies can be used to build container images using Dockerfiles. Like CNB, they don’t require Docker. Unlike CNB, they are limited to Dockerfile-based workflows:
- Kaniko - Provides Dockerfile support using similar userspace techniques for container image generation.
- Buildah, podman, img, etc. - Provide Dockerfile support, and unlike CNB, require nested containers with user-namespacing.
In general, CNB often competes with Dockerfiles for developer mindshare. Compared to CNB, Dockerfiles do not allow at-scale security patching for pre-built images or declarative build definitions. They also generally require build logic to be present in each application. For more information, see buildpacks.io and this deck.