<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Flux – Security Documentation</title><link>https://deploy-preview-2413--fluxcd.netlify.app/flux/security/</link><description>Recent content in Security Documentation on Flux</description><generator>Hugo -- gohugo.io</generator><language>en</language><atom:link href="https://deploy-preview-2413--fluxcd.netlify.app/flux/security/index.xml" rel="self" type="application/rss+xml"/><item><title>Flux: Security Best Practices</title><link>https://deploy-preview-2413--fluxcd.netlify.app/flux/security/best-practices/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-2413--fluxcd.netlify.app/flux/security/best-practices/</guid><description>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>The Flux project strives to keep its components secure by design and by default.
This document aims to list all security-sensitive options or considerations that
must be taken into account when deploying Flux, and to serve as a guide for
security professionals auditing such deployments.&lt;/p>
&lt;p>Not all recommendations are required for a secure deployment. Some may impact the
convenience, performance or resource utilization of Flux. Therefore, use this in
combination with your own Security Posture and Risk Appetite.&lt;/p>
&lt;p>Some recommendations may overlap with Kubernetes security recommendations. To keep
this document short and maintainable, please refer to the
&lt;a href="https://www.cisecurity.org/benchmark/kubernetes" target="_blank">Kubernetes CIS Benchmark&lt;/a>
for non-Flux-specific guidance.&lt;/p>
&lt;p>For help implementing these recommendations, seek
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/support/#commercial-support">enterprise support&lt;/a>.&lt;/p>
&lt;h2 id="security-best-practices">Security Best Practices&lt;/h2>
&lt;p>The recommendations below are based on Flux&amp;rsquo;s latest version.&lt;/p>
&lt;h3 id="helm-controller">Helm Controller&lt;/h3>
&lt;h4 id="start-up-flags">Start-up flags&lt;/h4>
&lt;ul>
&lt;li>
&lt;p>Ensure controller was not started with &lt;code>--insecure-kubeconfig-exec=true&lt;/code>.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>KubeConfigs support the execution of a binary command to return the token required to authenticate against a Kubernetes cluster.&lt;/p>
&lt;p>This is very handy for acquiring contextual tokens that are time-bound (e.g. aws-iam-authenticator).&lt;br>
However, this may be open for abuse in multi-tenancy environments and therefore is disabled by default.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check Helm Controller&amp;rsquo;s pod YAML for the arguments used at start-up:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>helm-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;li>
&lt;p>Ensure controller was not started with &lt;code>--insecure-kubeconfig-tls=true&lt;/code>.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>Disables the enforcement of TLS when accessing the API Server of remote clusters.&lt;/p>
&lt;p>This flag was created to enable scenarios in which non-production clusters need to be accessed via HTTP. Do not disable TLS in production.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check Helm Controller&amp;rsquo;s pod YAML for the arguments used at start-up:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>helm-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;/ul>
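&lt;p>To make the exec risk concrete, the snippet below sketches the kind of kubeConfig user entry that this flag gates; the cluster name and credential plugin are illustrative:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0">&lt;code class="language-yaml" data-lang="yaml"># A kubeConfig user whose token is produced by an exec credential plugin.
# With --insecure-kubeconfig-exec at its default (false), helm-controller
# refuses to run the command below.
users:
  - name: remote-cluster
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws-iam-authenticator   # illustrative plugin
        args: ["token", "-i", "example-cluster"]
&lt;/code>&lt;/pre>&lt;/div>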
&lt;h3 id="kustomize-controller">Kustomize Controller&lt;/h3>
&lt;h4 id="start-up-flags-1">Start-up flags&lt;/h4>
&lt;ul>
&lt;li>
&lt;p>Ensure controller was not started with &lt;code>--insecure-kubeconfig-exec=true&lt;/code>.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>KubeConfigs support the execution of a binary command to return the token required to authenticate against a Kubernetes cluster.&lt;/p>
&lt;p>This is very handy for acquiring contextual tokens that are time-bound (e.g. aws-iam-authenticator).&lt;/p>
&lt;p>However, this may be open for abuse in multi-tenancy environments and therefore is disabled by default.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check Kustomize Controller&amp;rsquo;s pod YAML for the arguments used at start-up:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>kustomize-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;li>
&lt;p>Ensure controller was not started with &lt;code>--insecure-kubeconfig-tls=true&lt;/code>.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>Disables the enforcement of TLS when accessing the API Server of remote clusters.&lt;/p>
&lt;p>This flag was created to enable scenarios in which non-production clusters need to be accessed via HTTP. Do not disable TLS in production.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check Kustomize Controller&amp;rsquo;s pod YAML for the arguments used at start-up:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>kustomize-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;li>
&lt;p>Ensure controller was started with &lt;code>--no-remote-bases=true&lt;/code>.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>By default the Kustomize controller allows for kustomize overlays to refer to external bases.
This has a performance penalty, as the bases will have to be downloaded on demand during each reconciliation.&lt;br>
When using external bases, there can&amp;rsquo;t be any assurances that the externally declared state won&amp;rsquo;t change.
In this case, the source loses its hermetic properties. Changes in the external bases will result in changes on the cluster, regardless of whether the source has been modified since the last reconciliation.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check Kustomize Controller&amp;rsquo;s pod YAML for the arguments used at start-up:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>kustomize-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;/ul>
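&lt;p>As an example of what &lt;code>--no-remote-bases=true&lt;/code> blocks, consider an overlay whose &lt;code>kustomization.yaml&lt;/code> pulls a base from an external repository (the repository path below is illustrative):&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0">&lt;code class="language-yaml" data-lang="yaml"># kustomization.yaml
resources:
  # Remote base: fetched on demand at every reconciliation, so the
  # source is no longer hermetic. Rejected when --no-remote-bases=true.
  - github.com/example-org/example-repo/base?ref=v1.0.0
  # Local resources remain allowed.
  - deployment.yaml
&lt;/code>&lt;/pre>&lt;/div>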
&lt;h4 id="secret-decryption">Secret Decryption&lt;/h4>
&lt;ul>
&lt;li>
&lt;p>Ensure Secret Decryption is enabled and secrets are not being held in Flux Sources in plaintext.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>The kustomize-controller has an auto decryption mechanism that can decrypt cipher texts on-demand at reconciliation time using an embedded implementation of
&lt;a href="https://github.com/mozilla/sops" target="_blank">SOPS&lt;/a>. This enables credentials (e.g. passwords, tokens) and sensitive information to be kept in an encrypted state in the sources.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;ul>
&lt;li>Check for plaintext credentials stored in the Git repository at both HEAD and historical commits. Auto-detection tools can be used for this, such as
&lt;a href="https://github.com/zricethezav/gitleaks" target="_blank">GitLeaks&lt;/a>,
&lt;a href="https://github.com/trufflesecurity/trufflehog" target="_blank">Trufflehog&lt;/a> and
&lt;a href="https://github.com/owenrumney/squealer" target="_blank">Squealer&lt;/a>.&lt;/li>
&lt;li>Check whether Secret Decryption is properly enabled in each &lt;code>spec.decryption&lt;/code> field of the cluster&amp;rsquo;s &lt;code>Kustomization&lt;/code> objects.&lt;/li>
&lt;/ul>
&lt;/details>
&lt;/li>
&lt;/ul>
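&lt;p>For reference, a &lt;code>Kustomization&lt;/code> with SOPS decryption enabled looks like the sketch below; the object and Secret names are illustrative:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0">&lt;code class="language-yaml" data-lang="yaml">apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-app
  decryption:
    provider: sops
    secretRef:
      name: sops-age   # Secret holding the private decryption key
&lt;/code>&lt;/pre>&lt;/div>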
&lt;h2 id="additional-best-practices-for-shared-cluster-multi-tenancy">Additional Best Practices for Shared Cluster Multi-tenancy&lt;/h2>
&lt;h3 id="multi-tenancy-lock-down">Multi-tenancy Lock-down&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Ensure &lt;code>helm-controller&lt;/code>, &lt;code>kustomize-controller&lt;/code>, &lt;code>notification-controller&lt;/code>, &lt;code>image-reflector-controller&lt;/code> and &lt;code>image-automation-controller&lt;/code> have cross namespace references disabled via &lt;code>--no-cross-namespace-refs=true&lt;/code>.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>Blocks references to Flux objects across namespaces. This assumes that tenants own one or more namespaces, and should not be allowed to consume another tenant&amp;rsquo;s objects, as this could enable them to gain access to sources they do not (or should not) have access to.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check the Controller&amp;rsquo;s YAML for the arguments used at start-up:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>helm-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>kustomize-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>notification-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>image-reflector-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>image-automation-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;li>
&lt;p>Ensure &lt;code>helm-controller&lt;/code> and &lt;code>kustomize-controller&lt;/code> have a default service account set via &lt;code>--default-service-account=&amp;lt;service-account-name&amp;gt;&lt;/code>.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>Enforces all reconciliations to impersonate a given Service Account, effectively disabling the use of the privileged service account that would otherwise be used by the controller.&lt;/p>
&lt;p>Tenants must set a service account for each object that is responsible for applying changes to the Cluster (i.e.
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/components/helm/helmreleases/#enforcing-impersonation">HelmRelease&lt;/a> and
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/components/kustomize/kustomizations/#enforcing-impersonation">Kustomization&lt;/a>), otherwise Kubernetes&amp;rsquo;s API Server will not authorize the changes. NB: it is recommended that the default service account has no permissions on the control plane.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check the Controller&amp;rsquo;s YAML for the arguments used at start-up:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>helm-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>kustomize-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;li>
&lt;p>Ensure all Flux controllers have default service accounts set for workload identity authentication via the &lt;code>--default-service-account=&amp;lt;service-account-name&amp;gt;&lt;/code>, &lt;code>--default-decryption-service-account=&amp;lt;service-account-name&amp;gt;&lt;/code> and &lt;code>--default-kubeconfig-service-account=&amp;lt;service-account-name&amp;gt;&lt;/code> flags.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>In multi-tenant environments, workload identity authentication should be locked down to force tenant permissions used in cloud provider integrations to be provisioned following the Principle of Least Privilege. This ensures proper isolation between tenants with regards to ownership of cloud resources. This is separate from the Kubernetes RBAC impersonation controls mentioned above.&lt;/p>
&lt;p>Setting default service accounts ensures that when Flux resources don&amp;rsquo;t specify a service account for workload identity authentication, they fall back to a controlled default expected to exist in the resource&amp;rsquo;s namespace, i.e. in the tenant&amp;rsquo;s namespace.&lt;/p>
&lt;p>The workload identity default service account flags are &lt;code>--default-decryption-service-account&lt;/code> and &lt;code>--default-kubeconfig-service-account&lt;/code> for &lt;code>kustomize-controller&lt;/code>, &lt;code>--default-kubeconfig-service-account&lt;/code> for &lt;code>helm-controller&lt;/code>, and &lt;code>--default-service-account&lt;/code> for &lt;code>source-controller&lt;/code>, &lt;code>notification-controller&lt;/code>, &lt;code>image-reflector-controller&lt;/code> and &lt;code>image-automation-controller&lt;/code>.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check all Flux controllers for workload identity default service account flags (&lt;code>--default-service-account&lt;/code>, &lt;code>--default-decryption-service-account&lt;/code>, &lt;code>--default-kubeconfig-service-account&lt;/code>):&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>source-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>kustomize-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>helm-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>notification-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>image-reflector-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>image-automation-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;/ul>
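&lt;p>To illustrate the impersonation requirement above: with &lt;code>--default-service-account&lt;/code> set on the controller, a tenant&amp;rsquo;s &lt;code>Kustomization&lt;/code> should name a service account granted just enough RBAC for its own workloads (names below are illustrative):&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0">&lt;code class="language-yaml" data-lang="yaml">apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: tenant-a
spec:
  interval: 10m
  path: ./deploy
  prune: true
  serviceAccountName: tenant-a-reconciler   # impersonated for all applies
  sourceRef:
    kind: GitRepository
    name: app
&lt;/code>&lt;/pre>&lt;/div>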
&lt;h3 id="secret-decryption-1">Secret Decryption&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Ensure Secret Decryption is configured correctly, such that each tenant has the correct level of isolation.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>The secret decryption configuration must be aligned with the level of isolation required across tenants.&lt;/p>
&lt;ul>
&lt;li>For higher isolation, each tenant must have their own Key Encryption Key (KEK) configured. Note that the access controls to the aforementioned keys must also be aligned for better isolation.&lt;/li>
&lt;li>For lower isolation requirements, or for secrets that are shared across multiple tenants, cluster-level keys could be used.&lt;/li>
&lt;/ul>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;ul>
&lt;li>Check whether the Secret Provider configuration is security-hardened. Refer to the
&lt;a href="https://github.com/mozilla/sops" target="_blank">SOPS&lt;/a> and
&lt;a href="https://github.com/bitnami-labs/sealed-secrets" target="_blank">SealedSecrets&lt;/a> documentation for how best to implement each solution.&lt;/li>
&lt;li>When SealedSecrets are employed, pay special attention to the scopes being used.&lt;/li>
&lt;/ul>
&lt;/details>
&lt;/li>
&lt;/ul>
&lt;h3 id="resource-isolation">Resource Isolation&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Ensure additional Flux instances are deployed when mission-critical tenants/workloads must be assured.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>Sharing the same instances of the Flux components across all tenants, including the Platform Admin, will lead to all reconciliations competing for the same resources. In addition, all Flux objects are placed on the same reconciliation queue, which is bounded by the number of workers set for each controller (e.g. &lt;code>--concurrent=20&lt;/code>), so reconciliation intervals may not be accurately honored.&lt;/p>
&lt;p>For improved reliability, additional instances of the Flux components can be deployed, effectively creating separate &amp;ldquo;lanes&amp;rdquo; that are not disrupted by noisy neighbors. An example of this approach would be having additional instances of both the Kustomize and Helm controllers that focus on applying platform-level changes, so they do not compete with tenant changes.&lt;/p>
&lt;p>Running multiple Flux instances within the same cluster is supported by means of sharding, please consult the
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/cheatsheets/sharding/">Flux sharding and horizontal scaling documentation&lt;/a> for more details.&lt;/p>
&lt;p>To avoid conflicts among controllers while attempting to reconcile Custom Resources, each instance of a controller type (e.g. &lt;code>source-controller&lt;/code>) must be configured with a unique label selector in the &lt;code>--watch-label-selector&lt;/code> flag.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;p>Check for the existence of additional Flux controller instances and their respective scopes. Each controller must be started with &lt;code>--watch-label-selector&lt;/code> and have the selector point to unique label values:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-sh" data-lang="sh">&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>kustomize-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>helm-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>kubectl describe pod -n flux-system -l &lt;span style="color:#bb60d5">app&lt;/span>&lt;span style="color:#666">=&lt;/span>source-controller | grep -B &lt;span style="color:#40a070">5&lt;/span> -A &lt;span style="color:#40a070">10&lt;/span> Args
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;/details>
&lt;/li>
&lt;/ul>
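&lt;p>As a sketch of the sharding set-up described above, each controller instance selects its own shard, and objects are assigned to shards by label; the shard key value below is illustrative:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0">&lt;code class="language-yaml" data-lang="yaml"># Controller started with:
#   --watch-label-selector=sharding.fluxcd.io/key=shard1
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: platform
  namespace: flux-system
  labels:
    sharding.fluxcd.io/key: shard1   # reconciled only by the shard1 instance
spec:
  interval: 5m
  url: https://github.com/example-org/platform
  ref:
    branch: main
&lt;/code>&lt;/pre>&lt;/div>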
&lt;h3 id="node-isolation">Node Isolation&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Ensure worker nodes are not being shared across tenants and the Flux components.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>Pods sharing the same worker node may enable threat vectors which might enable a malicious tenant to have a negative impact on the Confidentiality, Integrity or Availability of the co-located pods.&lt;/p>
&lt;p>The Flux components may have Control Plane privileges while some tenants may not. A co-located pod could leverage its privileges in the shared worker node to bypass its own Control Plane access limitations by compromising one of the co-located Flux components. For cases in which cross-tenant isolation requirements must be enforced, the same risks apply.&lt;/p>
&lt;p>Employ techniques to ensure that untrusted workloads are sandboxed, and only share worker nodes when the risks are acceptable under your security requirements.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;ul>
&lt;li>Check whether you adhere to
&lt;a href="https://kubernetes.io/docs/concepts/security/multi-tenancy/#node-isolation" target="_blank">Kubernetes Node Isolation Guidelines&lt;/a>&lt;/li>
&lt;li>Check whether there are Admission Controllers/OPA blocking tenants from creating privileged containers.&lt;/li>
&lt;li>Check whether
&lt;a href="https://kubernetes.io/docs/concepts/containers/runtime-class/" target="_blank">RuntimeClass&lt;/a> is being employed to sandbox workloads that may be scheduled in shared worker nodes.&lt;/li>
&lt;li>Check whether
&lt;a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" target="_blank">Taints and Tolerations&lt;/a> are being used to decrease the likelihood of sharing worker nodes across tenants, or with the Flux controllers. Some cloud providers have this encapsulated as Node Pools.&lt;/li>
&lt;/ul>
&lt;/details>
&lt;/li>
&lt;/ul>
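&lt;p>One way to keep the Flux controllers off shared worker nodes is to patch their Deployments with a node selector and a matching toleration for a dedicated, tainted node pool; the label and taint names below are illustrative:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0">&lt;code class="language-yaml" data-lang="yaml"># Deployment pod template patch (applied to each Flux controller)
spec:
  template:
    spec:
      nodeSelector:
        node-role.example.com/flux: "true"
      tolerations:
        - key: node-role.example.com/flux
          operator: Equal
          value: "true"
          effect: NoSchedule
&lt;/code>&lt;/pre>&lt;/div>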
&lt;h3 id="network-isolation">Network Isolation&lt;/h3>
&lt;ul>
&lt;li>
&lt;p>Ensure the Container Network Interface (CNI) being used in the cluster supports Network Policies.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>Flux relies on Network Policies to ensure that only Flux components have direct access to the source artifacts kept in the Source Controller.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;ul>
&lt;li>Check whether you adhere to
&lt;a href="https://kubernetes.io/docs/concepts/security/multi-tenancy/#network-isolation" target="_blank">Kubernetes Network Isolation Guidelines&lt;/a>&lt;/li>
&lt;li>Confirm that the
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/flux-e2e/#fluxs-default-configuration-for-networkpolicy">Network Policy&lt;/a> objects created by Flux are being enforced by the CNI. Alternatively, run a tool such as
&lt;a href="https://github.com/mattfenwick/cyclonus" target="_blank">Cyclonus&lt;/a> or
&lt;a href="https://github.com/vmware-tanzu/sonobuoy" target="_blank">Sonobuoy&lt;/a> to validate NetworkPolicy enforcement by the CNI plugin on your cluster.&lt;/li>
&lt;/ul>
&lt;/details>
&lt;/li>
&lt;/ul>
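&lt;p>As part of the audit, it helps to keep in mind the general shape of the policies Flux relies on. The sketch below (not Flux&amp;rsquo;s exact manifest) restricts ingress to the &lt;code>flux-system&lt;/code> pods to traffic originating within the same namespace:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0">&lt;code class="language-yaml" data-lang="yaml">apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-in-namespace-only   # illustrative name
  namespace: flux-system
spec:
  podSelector: {}          # selects all pods in flux-system
  ingress:
    - from:
        - podSelector: {}  # only peers in the same namespace
&lt;/code>&lt;/pre>&lt;/div>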
&lt;h2 id="additional-best-practices-for-tenant-dedicated-cluster-multi-tenancy">Additional Best Practices for Tenant Dedicated Cluster Multi-tenancy&lt;/h2>
&lt;ul>
&lt;li>
&lt;p>Ensure tenants are not able to revoke Platform Admin access to their clusters.&lt;/p>
&lt;details>
&lt;summary>Rationale&lt;/summary>
&lt;p>In environments in which a management cluster is used to bootstrap and manage other clusters, it is important to ensure that a tenant is not allowed to revoke access from the Platform Admin, effectively denying the Management Cluster the ability to further reconcile changes into the tenant&amp;rsquo;s Cluster.&lt;/p>
&lt;p>The Platform Admin should ensure that this is taken into account during the tenant&amp;rsquo;s cluster bootstrap process, and that a breakglass procedure is in place to recover access without the need to rebuild the cluster.&lt;/p>
&lt;/details>
&lt;details>
&lt;summary>Audit Procedure&lt;/summary>
&lt;ul>
&lt;li>Check whether alerts are in place in case Remote Apply operations fail.&lt;/li>
&lt;li>Check that the permission set given to the tenant&amp;rsquo;s users and applications is not overly privileged.&lt;/li>
&lt;li>Check whether there are Admission Controllers/OPA rules blocking changes in Platform Admin&amp;rsquo;s permissions and overall resources.&lt;/li>
&lt;/ul>
&lt;/details>
&lt;/li>
&lt;/ul></description></item><item><title>Flux: Contextual Authorization</title><link>https://deploy-preview-2413--fluxcd.netlify.app/flux/security/contextual-authorization/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-2413--fluxcd.netlify.app/flux/security/contextual-authorization/</guid><description>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>Most cloud providers support context-based authorization, enabling applications
to benefit from strong access controls applied to a given context (e.g. Virtual
Machine), without the need to manage authentication tokens and credentials.&lt;/p>
&lt;p>For example, by granting a given Virtual Machine (or principal that such machine
operates under) access to AWS S3, applications running inside that machine can
request a token on-demand, which would grant them access to the AWS S3 buckets
without having to store long lived credentials anywhere.&lt;/p>
&lt;p>By leveraging such capability, Flux users can focus on the big picture, which is
access control enforcement with a least-privilege approach, while not having to
deal with security hygiene topics such as encrypting authentication secrets and
ensuring they are rotated regularly.
All that is taken care of automatically by the cloud providers, as the tokens provided
are context- and time-bound.&lt;/p>
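&lt;p>For instance, a &lt;code>Bucket&lt;/code> source can authenticate to AWS S3 through the surrounding node or pod identity by setting &lt;code>spec.provider&lt;/code>, with no &lt;code>secretRef&lt;/code> at all; the bucket details below are illustrative:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0">&lt;code class="language-yaml" data-lang="yaml">apiVersion: source.toolkit.fluxcd.io/v1
kind: Bucket
metadata:
  name: example-bucket
  namespace: flux-system
spec:
  interval: 5m
  provider: aws          # use contextual (IAM) authentication
  bucketName: example-bucket
  endpoint: s3.amazonaws.com
  region: us-east-1
&lt;/code>&lt;/pre>&lt;/div>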
&lt;h2 id="current-support">Current Support&lt;/h2>
&lt;p>Below is a list of Flux features that support this functionality and their documentation:&lt;/p>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Status&lt;/th>
&lt;th>Component&lt;/th>
&lt;th>Feature&lt;/th>
&lt;th>Provider&lt;/th>
&lt;th>Ref&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>GitRepository Authentication&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>Bucket Authentication&lt;/td>
&lt;td>AWS&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/aws/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>Bucket Authentication&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>Bucket Authentication&lt;/td>
&lt;td>GCP&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/gcp/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>OCIRepository Authentication&lt;/td>
&lt;td>AWS&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/aws/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>OCIRepository Authentication&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>OCIRepository Authentication&lt;/td>
&lt;td>GCP&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/gcp/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>&lt;code>oci&lt;/code> HelmRepository Authentication&lt;/td>
&lt;td>AWS&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/aws/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>&lt;code>oci&lt;/code> HelmRepository Authentication&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Source Controller&lt;/td>
&lt;td>&lt;code>oci&lt;/code> HelmRepository Authentication&lt;/td>
&lt;td>GCP&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/gcp/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Kustomize Controller&lt;/td>
&lt;td>SOPS Integration with KMS&lt;/td>
&lt;td>AWS&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/aws/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Kustomize Controller&lt;/td>
&lt;td>SOPS Integration with Key Vault&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Kustomize Controller&lt;/td>
&lt;td>SOPS Integration with KMS&lt;/td>
&lt;td>GCP&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/gcp/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Kustomize Controller&lt;/td>
&lt;td>Remote EKS Cluster Authentication&lt;/td>
&lt;td>AWS&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/aws/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Kustomize Controller&lt;/td>
&lt;td>Remote AKS Cluster Authentication&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Kustomize Controller&lt;/td>
&lt;td>Remote GKE Cluster Authentication&lt;/td>
&lt;td>GCP&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/gcp/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Helm Controller&lt;/td>
&lt;td>Remote EKS Cluster Authentication&lt;/td>
&lt;td>AWS&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/aws/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Helm Controller&lt;/td>
&lt;td>Remote AKS Cluster Authentication&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Helm Controller&lt;/td>
&lt;td>Remote GKE Cluster Authentication&lt;/td>
&lt;td>GCP&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/gcp/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Notification Controller&lt;/td>
&lt;td>Azure DevOps Commit Status Updates&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Notification Controller&lt;/td>
&lt;td>Azure Event Hubs&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Notification Controller&lt;/td>
&lt;td>Google Cloud Pub/Sub&lt;/td>
&lt;td>GCP&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/gcp/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Image Reflector Controller&lt;/td>
&lt;td>ImageRepository Authentication&lt;/td>
&lt;td>AWS&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/aws/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Image Reflector Controller&lt;/td>
&lt;td>ImageRepository Authentication&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Image Reflector Controller&lt;/td>
&lt;td>ImageRepository Authentication&lt;/td>
&lt;td>GCP&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/gcp/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Supported&lt;/td>
&lt;td>Image Automation Controller&lt;/td>
&lt;td>GitRepository Authentication&lt;/td>
&lt;td>Azure&lt;/td>
&lt;td>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/integrations/azure/">Guide&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table></description></item><item><title>Flux: Secrets Management</title><link>https://deploy-preview-2413--fluxcd.netlify.app/flux/security/secrets-management/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://deploy-preview-2413--fluxcd.netlify.app/flux/security/secrets-management/</guid><description>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>Flux improves the application deployment process by continuously reconciling a
desired state, defined at a source, against a target cluster. One of the challenges
with this process is its dependency on secrets, which must not be stored in plain
text like the rest of the desired state.&lt;/p>
&lt;p>Secrets are sensitive information that an application needs to operate, such as
credentials, passwords, keys, tokens and certificates. Managing secrets declaratively
needs to be done right because of its broad security implications.&lt;/p>
&lt;p>We will cover the mechanisms supported by Flux, as well as the security principles,
concerns and techniques to consider when managing secrets with Flux.&lt;/p>
&lt;h3 id="whats-inside-the-toolbox">What&amp;rsquo;s inside the toolbox?&lt;/h3>
&lt;p>First of all, let&amp;rsquo;s go through the different options supported by Flux and Kubernetes.&lt;/p>
&lt;p>Nowadays there is a multitude of secret management options. Some are available in-cluster,
directly in the comfort of your Kubernetes cluster, while others are provided from
out-of-cluster, for example a cloud-based KMS.&lt;/p>
&lt;h4 id="kubernetes-secrets">Kubernetes Secrets&lt;/h4>
&lt;p>Kubernetes has a
&lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" target="_blank">built-in mechanism&lt;/a> to store and manage secrets. The secrets
are stored in etcd either in plain-text or
&lt;a href="https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/" target="_blank">encrypted&lt;/a>.&lt;/p>
&lt;p>They are the vanilla offering, used during &lt;code>flux bootstrap&lt;/code>, for example, to store your
SSH deploy keys. The exception is when the initial Flux source supports
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/security/contextual-authorization/">contextual authorization&lt;/a>,
in which case no secrets are required.&lt;/p>
&lt;p>Storing plain-text secrets in your desired state is not recommended, so apart from the secret
used to authenticate against your initial source, Flux users should not manage these. Instead,
they should rely mostly on other mechanisms covered below.&lt;/p>
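&lt;p>As a minimal sketch, this is roughly how a Flux source references such a Secret; the
repository URL and Secret name below are hypothetical:&lt;/p>

```yaml
# Hypothetical example: a GitRepository authenticating against a private
# repository with a Kubernetes Secret (e.g. one created by flux bootstrap).
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@github.com/example/fleet-infra
  ref:
    branch: main
  secretRef:
    name: flux-system # Secret holding the SSH deploy key
```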
&lt;h4 id="secrets-decryption-operators">Secrets Decryption Operators&lt;/h4>
&lt;p>Sometimes referred to as Encrypted Secrets, Secrets Decryption Operators enable secrets to be stored
in ciphertext as Kubernetes resources within a Flux source. Flux applies them to the cluster in their
encrypted Custom Resource form, and the corresponding Secrets Decryption Operator then decrypts them
and generates native Kubernetes Secrets.&lt;/p>
&lt;p>This is transparent to the consuming applications, making it a quite suitable approach to retrofit
into an existing setup. An example of a Secret Decryption Operator is
&lt;a href="https://github.com/bitnami-labs/sealed-secrets" target="_blank">Sealed Secrets&lt;/a>.&lt;/p>
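&lt;p>As an illustration, a SealedSecret custom resource stored in a Flux source could look
roughly like the following; names and ciphertext are placeholders:&lt;/p>

```yaml
# Hypothetical SealedSecret as kept in Git; only the controller's private
# key inside the cluster can decrypt the values under encryptedData.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: apps
spec:
  encryptedData:
    password: AgBy... # ciphertext produced by the kubeseal CLI
  template:
    metadata:
      name: db-credentials
      namespace: apps
```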
&lt;p>Storing encrypted secrets in Git repositories enables configuration versioning to leverage the
same practices used for managing, versioning and releasing applications and infrastructure
declaratively as &amp;ldquo;everything as code&amp;rdquo;, for example pull requests, tags and branching
strategies.&lt;/p>
&lt;p>Note that some sources (e.g. &lt;code>GitRepository&lt;/code>) may keep a history of the encrypted Secrets
over time. This increases the impact of a leaked old encryption key, especially when
other security measures (e.g. secret rotation) are not in place or when long-lived secrets
(e.g. public TLS certificates) are being handled.&lt;/p>
&lt;p>Notice that secrets can be stored in any Source type supported by Flux, such as
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/components/source/buckets/">Buckets&lt;/a> and
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/components/source/ocirepositories/">OCI repositories&lt;/a>.&lt;/p>
&lt;p>Flux specific guides on using Secrets Decryption Operators:&lt;/p>
&lt;ul>
&lt;li>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/guides/sealed-secrets/">Bitnami Sealed Secrets&lt;/a>&lt;/li>
&lt;/ul>
&lt;h4 id="using-flux-to-decrypt-secrets-on-demand">Using Flux to decrypt Secrets on-demand&lt;/h4>
&lt;p>Flux has the ability to decrypt secrets stored in Flux sources by itself, without the need
for additional controllers installed in the cluster. The approach relies on keeping
encrypted Kubernetes Secrets in Flux sources, which are decrypted on-demand with
&lt;a href="https://github.com/mozilla/sops" target="_blank">SOPS&lt;/a>, just before they are
deployed into the target clusters.&lt;/p>
&lt;p>This approach is more flexible than using
&lt;a href="https://github.com/bitnami-labs/sealed-secrets" target="_blank">Sealed Secrets&lt;/a>, as
&lt;a href="https://github.com/mozilla/sops" target="_blank">SOPS&lt;/a> supports cloud-based Key
Management Services of the major cloud providers (Azure KeyVault, GCP KMS and AWS KMS), HashiCorp
Vault, as well as &amp;ldquo;off-line&amp;rdquo; decryption using Age and PGP.&lt;/p>
&lt;p>This mechanism supports the Kustomize
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/components/kustomize/kustomizations/#kustomize-secretgenerator">secretGenerator&lt;/a>, which ensures that dependent workloads
reload automatically and start using the latest version of the secret. Notice that most approaches
based on Kubernetes Secrets would require something like
&lt;a href="https://github.com/stakater/Reloader" target="_blank">stakater/Reloader&lt;/a> to achieve
the same result. The
&lt;a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomizations/#secretgenerator" target="_blank">Kubernetes documentation&lt;/a> explains quite well how this works.&lt;/p>
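&lt;p>A minimal sketch of this pattern, assuming a SOPS-encrypted &lt;code>credentials.env&lt;/code>
file that Flux decrypts before running Kustomize (file and resource names are hypothetical):&lt;/p>

```yaml
# kustomization.yaml: the generated Secret gets a content-hash suffix,
# so workloads referencing it are updated whenever the content changes.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: app-credentials
    envs:
      - credentials.env # decrypted by Flux's SOPS integration
```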
&lt;p>The security concerns of this approach are similar to those of Secrets Decryption Operators, but with
the added benefit that no additional controllers are required, thereby reducing resource consumption
and the attack surface. When using external providers (e.g. KMS, Vault), remember that they can become
a single point of failure: if they are deleted by mistake or unavailable for extended periods, your
solution could be impacted.&lt;/p>
&lt;p>Flux provides specific how-to guides for this approach:&lt;/p>
&lt;ul>
&lt;li>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/guides/mozilla-sops/">Mozilla SOPS Guide&lt;/a>&lt;/li>
&lt;li>
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/components/kustomize/kustomizations/#decryption">Secrets decryption&lt;/a>&lt;/li>
&lt;/ul>
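&lt;p>A Flux Kustomization configured for SOPS decryption looks roughly like this;
&lt;code>sops-age&lt;/code> is a hypothetical Secret holding the Age private key:&lt;/p>

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps
  prune: true
  decryption:
    provider: sops
    secretRef:
      name: sops-age # Secret containing the Age private key
```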
&lt;h4 id="secrets-synchronized-by-operators">Secrets Synchronized by Operators&lt;/h4>
&lt;p>The source of truth for your secrets can reside outside of the cluster, and then be synchronised
into the cluster as Kubernetes Secrets by operators. Much like encrypted secrets, this process
is transparent to the workloads in the cluster.&lt;/p>
&lt;p>Two examples of this type of operator are
&lt;a href="https://github.com/1Password/onepassword-operator" target="_blank">1Password Operator&lt;/a> and
&lt;a href="https://github.com/external-secrets/external-secrets" target="_blank">External Secrets Operator&lt;/a>.
But given their nature, Flux is able to support any operator that manages Kubernetes secrets.&lt;/p>
&lt;p>This approach provides a level of redundancy by default: because secrets are kept at both the cluster
and the remote source, brief failures of the remote source may go unnoticed. It also supports hybrid
workloads quite well, in which some secrets have to be shared with applications that are not
Kubernetes-based.&lt;/p>
&lt;p>When using mutable secrets, it could be hard for Flux or the dependent applications to know
whether they are using the latest version of a given secret. In such cases, immutable secrets,
where the name also contains the version of the secret, may help.&lt;/p>
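&lt;p>For example, an immutable Secret with the version embedded in its name (the name and
value below are hypothetical) makes it unambiguous which revision a workload consumes:&lt;/p>

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-token-v2 # version encoded in the name
type: Opaque
immutable: true # Kubernetes rejects further updates to this Secret
stringData:
  token: example-token-value
```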
&lt;p>Take into account the loading times when provisioning a new cluster, as secret synchronisation can
become a bottleneck that slows down provisioning as the number of secrets increases.&lt;/p>
&lt;p>Flux supports all operators that provide this functionality.&lt;/p>
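&lt;p>As a sketch, an External Secrets Operator resource that synchronises a value from an
external store into a Kubernetes Secret could look like this (store and key names are
hypothetical):&lt;/p>

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h # how often the operator re-syncs the value
  secretStoreRef:
    name: my-store # SecretStore configured separately
    kind: SecretStore
  target:
    name: db-credentials # Kubernetes Secret to create
  data:
    - secretKey: password
      remoteRef:
        key: prod/db/password # path in the external store
```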
&lt;h4 id="secrets-mounted-via-csi-drivers">Secrets mounted via CSI Drivers&lt;/h4>
&lt;p>Another way to bring external secrets into Kubernetes is to use CSI drivers,
which mount secrets as files directly into a Pod&amp;rsquo;s filesystem, instead of generating
native Kubernetes Secrets.&lt;/p>
&lt;p>Due to the way this works, the secrets are not accessible within the Kubernetes Control
Plane, so although you can use them with your workloads, they won&amp;rsquo;t work with Custom
Resources that need a reference to a Secret
(e.g. &lt;code>.spec.secretRef.name&lt;/code> in &lt;code>GitRepository&lt;/code>).&lt;/p>
&lt;p>With CSI drivers, mounting takes place at Pod start-up time, so issues accessing
the external source of the secrets may be more impactful.&lt;/p>
&lt;p>Here are a few CSI providers:&lt;/p>
&lt;ul>
&lt;li>
&lt;a href="https://github.com/hashicorp/vault-csi-provider" target="_blank">HashiCorp Vault&lt;/a>&lt;/li>
&lt;li>
&lt;a href="https://docs.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" target="_blank">Azure KeyVault&lt;/a>&lt;/li>
&lt;li>
&lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_csi_driver.html" target="_blank">AWS Secrets Manager&lt;/a>&lt;/li>
&lt;li>
&lt;a href="https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp" target="_blank">GCP CSI Driver&lt;/a>&lt;/li>
&lt;/ul>
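&lt;p>A sketch of a Pod mounting a secret via the Secrets Store CSI driver;
&lt;code>app-secrets&lt;/code> is a hypothetical SecretProviderClass configured separately:&lt;/p>

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```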
&lt;h4 id="direct-access-to-out-of-cluster-secrets">Direct access to out-of-cluster Secrets&lt;/h4>
&lt;p>Direct access to a secret management solution that resides outside of a Kubernetes
cluster is also an option. This can be a useful alternative when lifting and
shifting legacy applications that already depend on such an approach.&lt;/p>
&lt;p>Here the secret management solution becomes a single point of failure:
expect issues when it goes temporarily unavailable, and make sure to have disaster recovery plans.
Also observe the throttling limits of cloud solutions; when different applications
target the same secret manager without a rate limiter shared across all of them,
an outage at scale can easily follow.&lt;/p>
&lt;p>Flux currently does not fetch secrets directly from out-of-cluster solutions, much like
most Kubernetes-native tools, so this approach may need to be combined with others such
as Secrets Synchronized by Operators. However, this does not prevent your applications
from fetching secrets directly.&lt;/p>
&lt;h3 id="big-picture---things-to-consider">Big Picture - Things to consider&lt;/h3>
&lt;p>Once you are aware of the different tools in the toolbox, it is important to align them
with your actual requirements, taking into account some key points:&lt;/p>
&lt;h4 id="expiration-and-rotation">Expiration and Rotation&lt;/h4>
&lt;p>Secrets should have an expiration time, and ideally such expiration should be enforced,
so that a potential leak has a well-defined risk window.&lt;/p>
&lt;p>To facilitate the uninterrupted use of dependent applications, rotation should be
automated, taking into account that at times different versions of the same secret
(old and new) may need to be supported simultaneously, e.g. whilst validating a
new version of the application that is being deployed.&lt;/p>
&lt;p>Both versions can remain active during a transition window, but once the new version is
validated after deployment, the previous one can be safely decommissioned.
Cloud KMS solutions tend to provide secret versioning built in.&lt;/p>
&lt;h4 id="access-management-and-auditing">Access Management and Auditing&lt;/h4>
&lt;p>Access to secrets should be restricted to the servers and applications in the environments
where they are needed. The same goes for users and service accounts.&lt;/p>
&lt;p>When considering the different solutions, it is important to note how they hang together
and what the gaps are. If you have strong requirements for access and auditing controls,
well-defined api-server auditing together with tight RBAC policies in your
cluster is only part of the solution. Also take into account how those secrets are sourced,
stored and handled: storing secrets (even in encrypted form) in an easily
accessible Flux source with loosely defined RBAC and no auditing in place may not meet
such requirements.&lt;/p>
&lt;h4 id="least-privileged-and-segregation-of-duties">Least Privilege and Segregation of Duties&lt;/h4>
&lt;p>The scope of each secret must be carefully considered to decrease the blast radius in
case of breach. A trade-off must be reached to attain a balance between the two extremes:
having a single secret that has all the access, versus having too many secrets that are
always used in combination.&lt;/p>
&lt;p>Sharing the same secret across different scopes just because they have the same permissions
may lead to disruption if that secret needs to be rotated quickly.&lt;/p>
&lt;h4 id="disaster-recovery">Disaster Recovery&lt;/h4>
&lt;p>The entire provisioning of your infrastructure and applications must take into account
break-the-glass procedures that are secure, provide relevant security controls (e.g. auditing)
and cannot be misused to bypass other processes (e.g. access management).&lt;/p>
&lt;p>When planning for disaster recovery scenarios, consider how they align with your
availability and confidentiality requirements.&lt;/p>
&lt;h4 id="dont-co-locate-ciphertext-with-encryption-keys">Don&amp;rsquo;t co-locate ciphertext with encryption keys&lt;/h4>
&lt;p>It should go without saying, but never place secrets together with keys that can provide privilege
escalation routes. For example, if you store the decryption key for your secrets in GitHub secrets,
and all your encrypted secrets are stored in the same repository, a single compromised GitHub account
(with enough access) is enough for all your secrets to be decrypted.&lt;/p>
&lt;p>Instead, segregate encryption keys from ciphertext and understand what needs to be compromised
for the data to be at risk.&lt;/p>
&lt;h4 id="single-points-of-failure">Single Points of Failure&lt;/h4>
&lt;p>Identify all potential single points of failure and ensure that there is a way around them.
If all your secrets are encrypted using an encryption key stored in Vault, and due to a major
failure your Vault instance is completely lost with no backup to be found, the encrypted
secrets are now useless. Therefore, think big picture, and ensure that each step of the way
has redundancy and that the recovery process is regularly exercised.&lt;/p>
&lt;p>The same goes for temporary single points of failure. If you rely on a Key Hierarchy Architecture
based on a cloud KMS to provision an on-premises cluster or application, consider the impact
a failure would have before, during or after deployment (of either the cluster or the applications).&lt;/p>
&lt;h4 id="ephemeral-or-single-use-secrets">Ephemeral or Single-use Secrets&lt;/h4>
&lt;p>The easiest type of secrets to manage are the ones that are ephemeral: context-bound and time-bound.
However, they are not supported by all use cases. Whenever they are, prioritise their use over
static or long-lived secrets.&lt;/p>
&lt;p>An example of a time-bound ephemeral secret is a token provided by cloud providers to
any application running within a given cloud machine. Those tokens are generated automatically
and have a short expiration time. In some cases you can even tie them to a network boundary,
meaning that even if they are breached, they cannot be used outside the current
context.&lt;/p>
&lt;p>Flux supports
&lt;a href="https://deploy-preview-2413--fluxcd.netlify.app/flux/security/contextual-authorization/">contextual authorization&lt;/a> for the major cloud providers; be aware of the supported
features and use them whenever possible.&lt;/p>
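&lt;p>For example, a Flux source can use contextual authorization instead of a static Secret
by setting the &lt;code>provider&lt;/code> field; the registry URL below is hypothetical:&lt;/p>

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 5m
  url: oci://123456789012.dkr.ecr.us-east-1.amazonaws.com/manifests
  provider: aws # workload identity is used; no secretRef required
  ref:
    tag: latest
```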
&lt;h4 id="detect-chicken-and-egg-scenarios">Detect &amp;ldquo;chicken and egg&amp;rdquo; scenarios&lt;/h4>
&lt;p>Flux won&amp;rsquo;t protect you from yourself. On a running cluster, it is quite easy to incrementally fall
into the trap of building a non-provisionable cluster. For example, if your first Kustomization
depends on a Custom Resource to deploy a secret, and that resource&amp;rsquo;s CRD and controller are only
deployed as part of another Kustomization, Flux may not be able to redeploy your sources from scratch
on a new cluster.&lt;/p>
&lt;p>Make sure that your pipeline identifies and tests such scenarios. Automate the provisioning of clusters
that test your entire deployment process end to end, and ensure that it is executed regularly.&lt;/p>
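&lt;p>Explicit ordering with &lt;code>dependsOn&lt;/code> helps avoid such scenarios; in this sketch,
&lt;code>secrets-operator&lt;/code> is a hypothetical Kustomization that installs the CRD and its
controller first:&lt;/p>

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  dependsOn:
    - name: secrets-operator # applied and healthy before "apps" runs
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./apps
  prune: true
```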
&lt;h3 id="summary">Summary&lt;/h3>
&lt;p>Flux supports a wide range of secret management solutions, and it is up to its users
to define what works best for their use case. This subject isn&amp;rsquo;t easy, and due diligence
is important to ensure the appropriate level of security controls is in place.&lt;/p>
&lt;p>Overall, none of the approaches covered above is inherently secure or insecure; rather, each is
part of a bigger picture in which what matters most is the weakest link
and how it all hangs together. As with all things in security, a layered approach is
recommended.&lt;/p>
&lt;p>Take into account your threat model, availability and resilience requirements
when deciding what works best for you, and rest assured that a combination of some of
the above will often make the most sense, especially when disaster recovery and break-the-glass
scenarios are considered.&lt;/p>
&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>Supply Chain Levels for Software Artifacts, or SLSA (pronounced &amp;ldquo;salsa&amp;rdquo;),
is a security framework which aims to prevent tampering and secure the artifacts of a project.
SLSA is designed to support automation that tracks code handling from source to binary,
protecting against tampering regardless of the complexity of the software supply chain.&lt;/p>
&lt;p>Starting with Flux version 2.0.0, the build, release and provenance portions of the Flux
project supply chain provisionally meet
&lt;a href="https://slsa.dev/spec/v1.0/levels" target="_blank">SLSA Build Level 3&lt;/a>.&lt;/p>
&lt;h2 id="slsa-requirements-and-flux-compliance-state">SLSA Requirements and Flux Compliance State&lt;/h2>
&lt;p>What follows is an assessment made by members of the Flux core maintainers team
on how Flux v2.0 complies with the Build Level 3 requirements as specified by
&lt;a href="https://slsa.dev/spec/v1.0/levels" target="_blank">SLSA v1.0&lt;/a>.&lt;/p>
&lt;h3 id="producer-requirements">Producer Requirements&lt;/h3>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Requirement&lt;/th>
&lt;th>Required at SLSA L3&lt;/th>
&lt;th>Met by Flux&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Choose an appropriate build platform&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Follow a consistent build process&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Distribute provenance&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h4 id="choose-an-appropriate-build-platform">Choose an appropriate build platform&lt;/h4>
&lt;blockquote>
&lt;p>The producer MUST choose a builder capable of producing Build Level 3 provenance.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>The Flux project uses Git for source code management and the Flux project&amp;rsquo;s repositories are hosted on GitHub under
the FluxCD organization.&lt;/li>
&lt;li>All the Flux maintainers are required to have two-factor authentication enabled and to sign off all their
contributions.&lt;/li>
&lt;li>The Flux project uses GitHub Actions and GitHub Runners for building all its release artifacts.&lt;/li>
&lt;li>The build and release process runs in isolation on an ephemeral environment provided by GitHub-hosted runners.&lt;/li>
&lt;/ul>
&lt;h4 id="follow-a-consistent-build-process">Follow a consistent build process&lt;/h4>
&lt;blockquote>
&lt;p>The producer MUST build their artifact in a consistent manner such that verifiers can form expectations about the
build process.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>The build and release process is defined in code (GitHub Workflows and Makefiles) and is kept under version control.&lt;/li>
&lt;li>The GitHub Workflows make use of GitHub Actions pinned to their Git commit SHA and are kept up-to-date using GitHub
Dependabot.&lt;/li>
&lt;li>All changes to the build and release process are done via Pull Requests that must be approved by at least one Flux
maintainer.&lt;/li>
&lt;li>The release process can only be kicked off by a Flux maintainer by pushing a Git tag in the semver format.&lt;/li>
&lt;/ul>
&lt;h4 id="distribute-provenance">Distribute provenance&lt;/h4>
&lt;blockquote>
&lt;p>The producer MUST distribute provenance to artifact consumers.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>The Flux project uses the
official
&lt;a href="https://github.com/slsa-framework/slsa-github-generator" target="_blank">SLSA GitHub Generator project&lt;/a> for provenance
generation and distribution.&lt;/li>
&lt;li>The provenance for the release artifacts published to GitHub releases (binaries, SBOMs, deploy manifests, source code)
is generated using the &lt;code>generator_generic_slsa3&lt;/code> GitHub Workflow provided by
the
&lt;a href="https://github.com/slsa-framework/slsa-github-generator" target="_blank">SLSA GitHub Generator project&lt;/a>.&lt;/li>
&lt;li>The provenance for the release artifacts published to GitHub Container Registry and to DockerHub (Flux controllers
multi-arch container images) is generated using the &lt;code>generator_container_slsa3&lt;/code> GitHub Workflow provided by
the
&lt;a href="https://github.com/slsa-framework/slsa-github-generator" target="_blank">SLSA GitHub Generator project&lt;/a>.&lt;/li>
&lt;/ul>
&lt;h3 id="build-platform-requirements">Build Platform Requirements&lt;/h3>
&lt;h4 id="provenance-generation">Provenance generation&lt;/h4>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Requirement&lt;/th>
&lt;th>Required at SLSA L3&lt;/th>
&lt;th>Met by Flux&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Provenance Exists&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Provenance is Authentic&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Provenance is Unforgeable&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;blockquote>
&lt;p>The build process MUST generate provenance that unambiguously identifies the output package by cryptographic digest
and describes how that package was produced.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>The Flux project release workflows make use of the
official
&lt;a href="https://github.com/slsa-framework/slsa-github-generator" target="_blank">SLSA GitHub Generator project&lt;/a> for provenance
generation.&lt;/li>
&lt;li>The provenance file stores the SHA-256 hashes of the release artifacts (binaries, SBOMs, deploy manifests, source
code).&lt;/li>
&lt;li>The provenance identifies the Flux container images using their digest in SHA-256 format.&lt;/li>
&lt;/ul>
&lt;blockquote>
&lt;p>Consumers MUST be able to validate the authenticity of the provenance attestation in order to ensure integrity and
define trust.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>The provenance is signed by Sigstore Cosign using the GitHub OIDC identity, and the verification material for the
provenance is stored in the public
&lt;a href="https://docs.sigstore.dev/rekor/overview/" target="_blank">Rekor transparency log&lt;/a>.&lt;/li>
&lt;li>The release process and the provenance generation are run in isolation on an ephemeral environment provided by
GitHub-hosted runners.&lt;/li>
&lt;li>The provenance of the Flux release artifacts (binaries, container images, SBOMs, deploy manifests) can be verified
using the official
&lt;a href="https://github.com/slsa-framework/slsa-verifier" target="_blank">SLSA verifier tool&lt;/a>.&lt;/li>
&lt;/ul>
&lt;blockquote>
&lt;p>Provenance MUST be strongly resistant to forgery by tenants.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>The provenance generation workflows run on ephemeral and isolated virtual machines which are fully managed by GitHub.&lt;/li>
&lt;li>The provenance signing secrets are ephemeral and are generated through
Sigstore&amp;rsquo;s
&lt;a href="https://github.com/sigstore/cosign/blob/main/KEYLESS.md" target="_blank">keyless signing&lt;/a> procedure.&lt;/li>
&lt;li>The
&lt;a href="https://github.com/slsa-framework/slsa-github-generator" target="_blank">SLSA GitHub generator&lt;/a> runs on separate virtual machines
from the build and release process, so the Flux build scripts don&amp;rsquo;t have access to the signing secrets.&lt;/li>
&lt;/ul>
&lt;h4 id="isolation-strength">Isolation strength&lt;/h4>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Requirement&lt;/th>
&lt;th>Required at SLSA L3&lt;/th>
&lt;th>Met by Flux&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Hosted&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Isolated&lt;/td>
&lt;td>Yes&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;blockquote>
&lt;p>All build steps ran using a hosted build platform on shared or dedicated infrastructure.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>The release process and the provenance generation are run in isolation on an ephemeral environment provided by
GitHub-hosted runners.&lt;/li>
&lt;li>The provenance generation is decoupled from the build process;
the
&lt;a href="https://github.com/slsa-framework/slsa-github-generator" target="_blank">SLSA GitHub generator&lt;/a> runs on separate virtual machines
fully managed by GitHub.&lt;/li>
&lt;/ul>
&lt;blockquote>
&lt;p>The build platform ensured that the build steps ran in an isolated environment, free of unintended external influence.&lt;/p>
&lt;/blockquote>
&lt;ul>
&lt;li>The release process can only be kicked off by a Flux maintainer by pushing a Git tag in the semver format.&lt;/li>
&lt;li>The release process runs on ephemeral and isolated virtual machines which are fully managed by GitHub.&lt;/li>
&lt;li>The release process can&amp;rsquo;t access the provenance signing key, because the provenance generator runs in isolation on
separate GitHub-hosted runners.&lt;/li>
&lt;/ul>
&lt;h2 id="provenance-verification">Provenance verification&lt;/h2>
&lt;p>The provenance of the Flux release artifacts (binaries, container images, SBOMs, deploy manifests)
can be verified using the official
&lt;a href="https://github.com/slsa-framework/slsa-verifier" target="_blank">SLSA verifier tool&lt;/a>.&lt;/p>
&lt;h3 id="container-images">Container images&lt;/h3>
&lt;p>The provenance of the Flux multi-arch container images hosted on GitHub Container Registry
and DockerHub can be verified using the official
&lt;a href="https://github.com/slsa-framework/slsa-verifier" target="_blank">SLSA verifier tool&lt;/a>
and
&lt;a href="https://github.com/sigstore/cosign" target="_blank">Sigstore Cosign&lt;/a>.&lt;/p>
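&lt;p>For illustration, verification could look roughly like the commands below; the release
tag and file names are examples, and the &lt;code>slsa-verifier&lt;/code> and &lt;code>cosign&lt;/code>
CLIs must be installed:&lt;/p>

```shell
# Verify the provenance of a CLI release artifact downloaded from GitHub:
slsa-verifier verify-artifact flux_2.0.1_linux_amd64.tar.gz \
  --provenance-path provenance.intoto.jsonl \
  --source-uri github.com/fluxcd/flux2 \
  --source-tag v2.0.1

# Verify the keyless signature of a controller image with Cosign:
cosign verify ghcr.io/fluxcd/source-controller:v1.0.0 \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity-regexp='^https://github.com/fluxcd'
```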
&lt;p>What follows is the list of Flux components along with their minimum required version for provenance verification.&lt;/p>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Git Repository&lt;/th>
&lt;th>Images&lt;/th>
&lt;th>Min version&lt;/th>
&lt;th>Provenance (SLSA L3)&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>
&lt;a href="https://github.com/fluxcd/flux2" target="_blank">flux2&lt;/a>&lt;/td>
&lt;td>&lt;code>docker.io/fluxcd/flux-cli&lt;/code>&lt;br/>&lt;code>ghcr.io/fluxcd/flux-cli&lt;/code>&lt;/td>
&lt;td>&lt;code>v2.0.1&lt;/code>&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>
&lt;a href="https://github.com/fluxcd/source-controller" target="_blank">source-controller&lt;/a>&lt;/td>
&lt;td>&lt;code>docker.io/fluxcd/source-controller&lt;/code>&lt;br/>&lt;code>ghcr.io/fluxcd/source-controller&lt;/code>&lt;/td>
&lt;td>&lt;code>v1.0.0&lt;/code>&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>
&lt;a href="https://github.com/fluxcd/kustomize-controller" target="_blank">kustomize-controller&lt;/a>&lt;/td>
&lt;td>&lt;code>docker.io/fluxcd/kustomize-controller&lt;/code>&lt;br/>&lt;code>ghcr.io/fluxcd/kustomize-controller&lt;/code>&lt;/td>
&lt;td>&lt;code>v1.0.0&lt;/code>&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>
&lt;a href="https://github.com/fluxcd/notification-controller" target="_blank">notification-controller&lt;/a>&lt;/td>
&lt;td>&lt;code>docker.io/fluxcd/notification-controller&lt;/code>&lt;br/>&lt;code>ghcr.io/fluxcd/notification-controller&lt;/code>&lt;/td>
&lt;td>&lt;code>v1.0.0&lt;/code>&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>
&lt;a href="https://github.com/fluxcd/helm-controller" target="_blank">helm-controller&lt;/a>&lt;/td>
&lt;td>&lt;code>docker.io/fluxcd/helm-controller&lt;/code>&lt;br/>&lt;code>ghcr.io/fluxcd/helm-controller&lt;/code>&lt;/td>
&lt;td>&lt;code>v0.35.0&lt;/code>&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>
&lt;a href="https://github.com/fluxcd/image-reflector-controller" target="_blank">image-reflector-controller&lt;/a>&lt;/td>
&lt;td>&lt;code>docker.io/fluxcd/image-reflector-controller&lt;/code>&lt;br/>&lt;code>ghcr.io/fluxcd/image-reflector-controller&lt;/code>&lt;/td>
&lt;td>&lt;code>v0.29.0&lt;/code>&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>
&lt;a href="https://github.com/fluxcd/image-automation-controller" target="_blank">image-automation-controller&lt;/a>&lt;/td>
&lt;td>&lt;code>docker.io/fluxcd/image-automation-controller&lt;/code>&lt;br/>&lt;code>ghcr.io/fluxcd/image-automation-controller&lt;/code>&lt;/td>
&lt;td>&lt;code>v0.35.0&lt;/code>&lt;/td>
&lt;td>Yes&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
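Before attempting verification, it can help to confirm that a deployed component is recent enough to carry provenance. The following sketch encodes the minimum versions from the table above in a small helper; the `min_version` and `meets_min` function names are illustrative (not part of Flux), and the comparison relies on `sort -V` for semantic version ordering:

```shell
# Minimum versions for SLSA provenance verification, taken from the table above.
min_version() {
  case "$1" in
    flux-cli) echo "v2.0.1" ;;
    source-controller|kustomize-controller|notification-controller) echo "v1.0.0" ;;
    helm-controller|image-automation-controller) echo "v0.35.0" ;;
    image-reflector-controller) echo "v0.29.0" ;;
    *) echo "unknown"; return 1 ;;
  esac
}

# Succeeds if the given version is >= the component's minimum.
# sort -V sorts semantic versions, so the minimum sorting first
# means the candidate version is at or above it.
meets_min() {
  min="$(min_version "$1")" || return 1
  [ "$(printf '%s\n%s\n' "$min" "$2" | sort -V | head -n1)" = "$min" ]
}
```

For example, `meets_min helm-controller v0.34.0` fails, because provenance for helm-controller is only attested from `v0.35.0` onwards.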
&lt;h3 id="example">Example&lt;/h3>
&lt;p>This example uses the
&lt;a href="https://github.com/fluxcd/source-controller" target="_blank">source-controller&lt;/a> container
image hosted on GHCR, but the same instructions apply to all Flux container images.&lt;/p>
&lt;p>First, collect the digest of the image to verify:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-console" data-lang="console">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#c65d09;font-weight:bold">$&lt;/span> crane digest ghcr.io/fluxcd/source-controller:v1.0.0
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">sha256:8dfd386a338eab2fde70cd7609e3b35a6e2f30283ecf2366da53013295fa65f3
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Using the digest, verify the provenance of the Flux controller by specifying the repository and version:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-console" data-lang="console">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#c65d09;font-weight:bold">$&lt;/span> slsa-verifier verify-image ghcr.io/fluxcd/source-controller:v1.0.0@sha256:8dfd386a338eab2fde70cd7609e3b35a6e2f30283ecf2366da53013295fa65f3 --source-uri github.com/fluxcd/source-controller --source-tag v1.0.0
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Verified build using builder https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@refs/tags/v1.7.0 at commit a40e0da705f26710077a7591f9dad05b7cd55acd
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">PASSED: Verified SLSA provenance
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Using Cosign, verify the SLSA provenance attestation by specifying the workflow and GitHub OIDC issuer:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-console" data-lang="console">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#c65d09;font-weight:bold">$&lt;/span> cosign verify-attestation --type slsaprovenance --certificate-identity-regexp https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@refs/tags/v --certificate-oidc-issuer https://token.actions.githubusercontent.com ghcr.io/fluxcd/source-controller:v1.0.0
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Verification for ghcr.io/fluxcd/source-controller:v1.0.0 --
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">The following checks were performed on each of these signatures:
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888"> - The cosign claims were validated
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888"> - Existence of the claims in the transparency log was verified offline
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888"> - The code-signing certificate was verified using trusted certificate authority certificates
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Certificate subject: https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@refs/tags/v1.7.0
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Certificate issuer URL: https://token.actions.githubusercontent.com
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">GitHub Workflow Trigger: push
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">GitHub Workflow SHA: a40e0da705f26710077a7591f9dad05b7cd55acd
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">GitHub Workflow Name: release
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">GitHub Workflow Repository: fluxcd/source-controller
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">GitHub Workflow Ref: refs/tags/v1.0.0
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h3 id="flux-artifacts">Flux artifacts&lt;/h3>
&lt;p>The provenance of the Flux release artifacts published on GitHub (binaries, SBOMs, deploy manifests)
can likewise be verified using the official
&lt;a href="https://github.com/slsa-framework/slsa-verifier" target="_blank">SLSA verifier tool&lt;/a>.&lt;/p>
&lt;h3 id="example-1">Example&lt;/h3>
&lt;p>This example uses the Flux SBOM file,
but the same instructions apply to all artifacts included in the Flux release.&lt;/p>
&lt;p>First, download the release artifacts from GitHub:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-shell" data-lang="shell">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#bb60d5">FLUX_VER&lt;/span>&lt;span style="color:#666">=&lt;/span>2.0.1 &lt;span style="color:#666">&amp;amp;&amp;amp;&lt;/span> &lt;span style="color:#4070a0;font-weight:bold">\
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#4070a0;font-weight:bold">&lt;/span>gh release download v&lt;span style="color:#70a0d0">${&lt;/span>&lt;span style="color:#bb60d5">FLUX_VER&lt;/span>&lt;span style="color:#70a0d0">}&lt;/span> -R&lt;span style="color:#666">=&lt;/span>fluxcd/flux2 -p&lt;span style="color:#666">=&lt;/span>&lt;span style="color:#4070a0">&amp;#34;*&amp;#34;&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Using the &lt;code>provenance.intoto.jsonl&lt;/code> file,
verify the provenance attestation of the Flux SBOM (&lt;code>flux_&amp;lt;version&amp;gt;_sbom.spdx.json&lt;/code>):&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" style="background-color:#f0f0f0;-moz-tab-size:4;-o-tab-size:4;tab-size:4;">&lt;code class="language-console" data-lang="console">&lt;span style="display:flex;">&lt;span>&lt;span style="color:#c65d09;font-weight:bold">$&lt;/span> slsa-verifier verify-artifact --provenance-path provenance.intoto.jsonl --source-uri github.com/fluxcd/flux2 --source-tag v&lt;span style="color:#70a0d0">${&lt;/span>&lt;span style="color:#bb60d5">FLUX_VER&lt;/span>&lt;span style="color:#70a0d0">}&lt;/span> flux_&lt;span style="color:#70a0d0">${&lt;/span>&lt;span style="color:#bb60d5">FLUX_VER&lt;/span>&lt;span style="color:#70a0d0">}&lt;/span>_sbom.spdx.json
&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Verified signature against tlog entry index 27066821 at URL: https://rekor.sigstore.dev/api/v1/log/entries/24296fb24b8ad77ac2d2dc6381ec7f1f04d991344771214c5fb5861621dbd9da6f0551f806cbf609
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Verified build using builder https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@refs/tags/v1.7.0 at commit 9b3162495ce1b99b1fcdf137c553f543eafe3ec7
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">Verifying artifact flux_2.0.1_sbom.spdx.json: PASSED
&lt;/span>&lt;/span>&lt;/span>&lt;span style="display:flex;">&lt;span>&lt;span style="color:#888">PASSED: Verified SLSA provenance
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item></channel></rss>