Secure your Kubernetes workloads with OPA Gatekeeper

15.12.2022 | 8 minutes of reading time

Kubernetes 1.25 has been released, and with that the long-announced removal of PodSecurityPolicies (short: PSPs) finally becomes reality. Finally? Yes: as Tabitha Sable from the Kubernetes SIG Security team put it in the linked blog post, PSPs were both confusing and unwieldy to use.

The way PSPs are applied to Pods has proven confusing to nearly everyone that has attempted to use them.
— Tabitha Sable

Spoiler: The first part of this post sheds some light on PSPs, but even if you have never used them, I strongly encourage you to read on, because the rest introduces more powerful and easier-to-use substitutes for ensuring pod security.

The general idea behind PSPs & Co.

But, before we get into any details, let’s take one step back and understand how PSPs (and their potential replacements) contribute to securing your workloads.

(Pod) Security Context

When you define your application (as a Deployment, StatefulSet, Pod ... you name it), you can set a SecurityContext either for the whole Pod or on a per-container basis. Within the SecurityContext, you can configure a lot of fields, which kubelet and your container runtime then process as instructions on how to run your pod.

securityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
  privileged: false

For example, if you set runAsNonRoot: true, the kubelet will verify that the container does not run as root (uid 0) and refuse to start it otherwise. To wrap up this (very) little sidenote: via the (Pod)SecurityContext object, you define the conditions under which the pod has to run. Kubernetes makes sure that the pod matches the configured settings and fails its start-up if it does not.
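
As a minimal sketch (image, names and values chosen just for illustration), here is a complete Pod that sets a securityContext for the whole pod and adds container-specific settings on top:

apiVersion: v1
kind: Pod
metadata:
  name: securitycontext-demo
spec:
  # pod-level settings apply to every container unless overridden
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    # container-level settings take precedence over the pod-level ones
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]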

Pod Security Policies

So, how did PSPs contribute to the security of your deployment? With PSPs you were able to define allowed values for the fields of the security context (and more, e.g. allowed volume types). They acted as an admission controller and rejected the creation of a Pod if it did not comply with the range of allowed values defined in the PSP.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: unprivileged-nonroot
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
    - ALL
  volumes:
    - 'configMap'
    - 'emptyDir'
  readOnlyRootFilesystem: true
  runAsUser:
    rule: MustRunAsNonRoot

So far, PSPs do not look too bad, do they? Well... let’s get to the part that Tabitha Sable criticizes: how are PSPs applied to a Pod? Obviously, via a ServiceAccount that is bound to a (Cluster)Role, which in turn has the right to use a certain PodSecurityPolicy. I do apologize for the sarcasm, but after the hard time I had understanding the concept and debugging PSPs and Pods, it felt appropriate.

For example, the PodSecurityPolicy from above would be applied to a pod with these resources:

# ServiceAccount the pod will run with
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-with-psp
  namespace: default
---
# ClusterRole granting the right to "use" the PSP defined above
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: unprivileged-nonroot
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - unprivileged-nonroot
  verbs:
  - use
---
# Bind the ClusterRole to the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: 10-unprivileged-clusterrole-assignment
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: unprivileged-nonroot
subjects:
- kind: ServiceAccount
  name: sa-with-psp
  namespace: default
---
# Pod running with that ServiceAccount (and thereby under the PSP)
apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: default
spec:
  serviceAccountName: sa-with-psp
  containers:
  - image: nginx
    name: debug

Gatekeeper to the rescue

Enough of the old stuff: PSPs were deprecated a long time ago, never left the beta phase, and with Kubernetes 1.25 their end of life has arrived. Let’s focus on the alternatives. In general, the approach of establishing some sort of admission controller works great for ensuring the usage of valid, pre-configured security measures.

K8s internal PSP replacement – Pod Security Standards

The first alternative I want to present is the Kubernetes “native” concept of Pod Security Admission. It applies one of the three predefined Pod Security Standards (privileged, baseline and restricted) per namespace and enforces the corresponding isolation level. Pod Security Standards and Admission provide a good and much easier-to-use baseline for enforcing pod security.
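
As a quick sketch of how this looks in practice: Pod Security Admission is configured via namespace labels. A namespace labelled like this (the name is just an example) would reject every pod that violates the restricted standard:

apiVersion: v1
kind: Namespace
metadata:
  name: restricted-workloads
  labels:
    # "enforce" rejects violating pods
    pod-security.kubernetes.io/enforce: restricted
    # "warn" and "audit" only report violations, handy for a soft rollout
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted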

Level-up using a fully fledged policy engine

Taking it one step further, if you want more powerful and more fine-grained control over pod security, you will come across a policy engine like the Open Policy Agent (short: OPA) or Kyverno rather soon. Generally, both are comparable. I will focus on OPA (or rather Gatekeeper, but we’ll come to that in a second) in this post, as it was our go-to replacement for PSPs.

If you have never heard of the Open Policy Agent at all, I’d recommend reading my colleague Marco's blog post. There you will get a decent overview of what the Open Policy Agent is all about and get to know another use case.

This blog post will focus on Gatekeeper. It acts (not only) as an admission controller, in the form of validating (and/or mutating) webhooks, and enforces all kinds of policies using OPA as its underlying policy engine. With Gatekeeper in place, you can make sure that your resources comply with your policies before they are stored in etcd (and with that, applied to your Kubernetes cluster). To represent your policies in the cluster, the concept relies on CustomResourceDefinitions (short: CRDs), namely Constraints and ConstraintTemplates, but more on that later on.

The following facts are important to understand (in order to really get the huge benefit of using a policy engine-based solution like OPA Gatekeeper):

  1. The check is done based on a so-called admission request, which contains – besides some metadata about the request – the actual manifest of the object in question in YAML or JSON format (the document). A simplified example of such a request follows below this list.
  2. OPA is domain-agnostic. It can be used on arbitrarily structured data. To put it in very simplified terms, with a policy you define which values should be allowed at any given path in the document.
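
For illustration, here is a heavily simplified sketch of such an admission request for a pod creation (most metadata fields are omitted). In Gatekeeper’s Rego policies it is exposed as input.review, so input.review.object points at the manifest of the object in question:

apiVersion: admission.k8s.io/v1
kind: AdmissionReview
request:
  operation: CREATE
  kind:
    group: ""
    version: v1
    kind: Pod
  # the actual manifest of the object in question ("the document")
  object:
    apiVersion: v1
    kind: Pod
    metadata:
      name: debug
    spec:
      containers:
      - name: debug
        image: nginx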

Coming back to the main idea behind this blog post: securing your workloads. With Gatekeeper in place, you can create and apply several constraints in your cluster that enforce the same set of requirements as PSPs did. To get started without diving too deep into Rego (the language OPA policies are written in), you can use this library of constraints and templates. It contains templates covering everything you previously did (or did not do) via PSPs. Additionally, you can find inspiration and templates for lots of other use cases of Gatekeeper: check for duplicate hosts in Ingresses, enforce sets of labels, create an allowlist for image registries (see the example below) and many more.
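
To give an impression of how little you have to write yourself when using the library: assuming the corresponding template (K8sAllowedRepos) from the library has been applied, an allowlist for image registries boils down to a constraint like this (the allowed prefix is of course just an example):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: allowed-image-repos
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    # only images whose reference starts with one of these prefixes are allowed
    repos:
      - "registry.example.com/"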

Apply Gatekeeper constraints in your cluster

As mentioned earlier in this blog post, Gatekeeper is based on CRDs. First, you create a constraint template (kind: ConstraintTemplate in the API group templates.gatekeeper.sh) that contains your policy logic and its possible inputs.

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8spspreadonlyrootfilesystem
  annotations:
    metadata.gatekeeper.sh/title: "Read Only Root Filesystem"
    description: >-
      Requires the use of a read-only root file system by pod containers.
      Corresponds to the `readOnlyRootFilesystem` field in a
      PodSecurityPolicy. For more information, see
      https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems
spec:
  crd:
    spec:
      names:
        kind: K8sPSPReadOnlyRootFilesystem
      validation:
        openAPIV3Schema:
          type: object
          description: >-
            Requires the use of a read-only root file system by pod containers.
          properties:
            exemptImages:
              description: >-
                Any container that uses an image that matches an entry in this list will be excluded
                from enforcement. Prefix-matching can be signified with `*`. For example: `my-image-*`.
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8spspreadonlyrootfilesystem

      violation[{"msg": msg, "details": {}}] {
        c := input_containers[_]
        input_read_only_root_fs(c)
        msg := sprintf("only read-only root filesystem container is allowed: %v", [c.name])
      }

      input_read_only_root_fs(c) {
        not has_field(c, "securityContext")
      }
      input_read_only_root_fs(c) {
        not c.securityContext.readOnlyRootFilesystem == true
      }

      input_containers[c] {
        c := input.review.object.spec.containers[_]
      }
      input_containers[c] {
        c := input.review.object.spec.initContainers[_]
      }
      input_containers[c] {
        c := input.review.object.spec.ephemeralContainers[_]
      }

      has_field(object, field) = true {
        object[field]
      }

To actually enforce the constraint, you need to instantiate the template. To do that, you create a constraint of the kind K8sPSPReadOnlyRootFilesystem (as defined under spec.crd.spec.names.kind in your template) in the API group constraints.gatekeeper.sh.

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPReadOnlyRootFilesystem
metadata:
  name: psp-readonlyrootfilesystem
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - "restricted"
  parameters:
    exemptImages:
      - "my-image-*"

This constraint, for example, would require pods in the namespace "restricted" to run with a read-only root file system, unless the configured image starts with "my-image-". While the exemptImages parameter is a user-defined setting of the template (here you can see the full reference; the snippet above was shortened for better readability), the match section is system-defined. You can limit the scope of your policies using several matchers, based on name, kind, namespace or labels. How each matcher works is best explained in the official Gatekeeper documentation. Using these matchers, you can selectively apply your constraints to certain parts of your cluster. Most likely, you will exclude namespaces like kube-system, as pods in them often need more privileges to fulfill their purpose.
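
For example, a cluster-wide variant of the constraint from above that skips system namespaces could use excludedNamespaces instead of an explicit namespace list (which namespaces you exclude is of course up to you):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPReadOnlyRootFilesystem
metadata:
  name: psp-readonlyrootfilesystem-clusterwide
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    # apply everywhere except these namespaces
    excludedNamespaces:
      - kube-system
      - gatekeeper-system
  parameters:
    exemptImages:
      - "my-image-*"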

Conclusion

I hope this blog post encourages you to dig deeper into securing workloads in Kubernetes. Especially for distributed and containerized systems like Kubernetes, it is a very important topic that is often not considered early enough, or not at all. Running Gatekeeper in dry-run mode might be a good place to start, as it does not interrupt your services but gives you a first insight into what you could tackle in terms of k8s security. If you are interested, I will provide a follow-up post on how to easily visualize violations in your cluster.
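
Dry-run mode is configured per constraint via the enforcementAction field; a sketch based on the constraint from above:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPReadOnlyRootFilesystem
metadata:
  name: psp-readonlyrootfilesystem
spec:
  # "dryrun" only records violations in the constraint's status,
  # while the default "deny" rejects non-compliant resources
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]

Violations found by Gatekeeper's audit controller then show up in the constraint's status, which you can inspect, for example, with kubectl describe on the constraint.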

One last thing to keep in mind: using Gatekeeper, Kyverno or similar tools does not automatically give you a secure workload. They only force the users of your cluster to make use of built-in security features like the ones presented in this blog post. For in-depth runtime security, you could additionally have a look at tools like Falco.
