Configure And Implement A Kubernetes Audit Webhook

This post is going to be mostly technical. By the end of it you should know exactly how to implement your own Kubernetes audit webhook.

If you need some background on Kubernetes auditing, I recommend reading my Kubernetes auditing introduction post first.

I’m writing this post because of the removal of the AuditSink object. We’re currently left with just two options for audit backends: writing audit events to a log file, or sending them to a webhook backend. Since I couldn’t find any relevant and complete guides on the subject, I decided to write one myself.

Before starting: all of the code and configuration files are available here: https://github.com/omri86/k8s-audit-webhook

Ingredients

  1. A local Kubernetes cluster. I have installed a kubeadm cluster locally on my machine. Please note that this post is oriented towards local (so-called “on-prem”) clusters. I won’t go into detail on how to set up a cluster with kubeadm, but you can find more information here.
  2. Go and your favorite Go editor. Make sure you have Go installed.

Policy File

As explained in the intro post, the auditing policy describes what gets audited. You’ll use a simple policy to avoid complications. Feel free to change it later on.

Save the following snippet to a file named audit-policy.yaml:

apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]

  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]

  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]

  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"

  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]

  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]

  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.

  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

You’ll use this file in a minute. Let’s finish preparing the backend file as well and apply both of them together.

Webhook Backend Configuration File

The webhook configuration file uses the kubeconfig format.

Save the following snippet to a file called audit-webhook.yaml:

apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: example-cluster
  cluster:
    server: http://127.0.0.1:8080
users:
- name: example-user
  user:
    username: some-user
    password: some-password
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
current-context: example-context

The reason for choosing this address and port will become clear when you implement the webhook server.

Configuring The Cluster

Now that you have both files ready, you need to pass them to the API server as flags. The official documentation explains exactly how to do this:

  1. Policy file: audit-policy.yaml is passed with the --audit-policy-file flag.
  2. Audit webhook config file: audit-webhook.yaml is passed with the --audit-webhook-config-file flag.

Copy both of these files to this path: /etc/kubernetes/pki. You’ll understand why in a minute.
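For example, assuming both files are in your current working directory:

$ sudo cp audit-policy.yaml audit-webhook.yaml /etc/kubernetes/pki/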

The way to pass flags to the API server when using kubeadm is by modifying the API server manifest file.

The manifest file contains the entire API server configuration. It is located at /etc/kubernetes/manifests/kube-apiserver.yaml

Open the file in a text editor. As you can see in the lower part of the manifest, the API server pod definition contains a volume mount:

    volumeMounts:
...
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true

And also the host path definition:

  volumes:
...
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs

That explains why you copied both files to this location. It’s not best practice, since this directory is intended for the cluster certificates, but for an introduction on a test cluster it simplifies the procedure.

Applying The Flags

Next we want to add our flags to the API server manifest. As you can see, the beginning of the file contains the API server flags. All you need to do is add the two flags mentioned above to this list (the last two lines in the snippet below):

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.0.2.15
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
...
    - --audit-policy-file=/etc/kubernetes/pki/audit-policy.yaml
    - --audit-webhook-config-file=/etc/kubernetes/pki/audit-webhook.yaml

Upon saving the manifest file, the API server will restart automatically. You can verify this by listing the containers on your machine. I’m using Docker as my container runtime, so a simple docker ps command should let me know that everything is OK:

CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS       
d47a3a806755        92d040a0dca7           "kube-apiserver --ad…"   15 seconds ago      Up 14 seconds
14f04f0df665        k8s.gcr.io/pause:3.2   "/pause"                 15 seconds ago      Up 14 seconds

Note that the CREATED timestamp is a few seconds ago.

If the API server did not start, I suggest examining the container logs. These are located under: /var/log/containers. Search for a file with a prefix of kube-apiserver. For more troubleshooting help feel free to comment on this post.
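For example, to locate the relevant log file:

$ ls /var/log/containers/ | grep kube-apiserver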

Note that there are more configuration flags that help you fine-tune the webhook backend; you can read more about them here.
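For instance, the webhook’s delivery mode and batching behavior can be tuned with flags such as these (shown with example values; they go into the same flag list in the manifest):

    - --audit-webhook-mode=batch
    - --audit-webhook-batch-max-size=100
    - --audit-webhook-initial-backoff=10s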

Implementing The Webhook Backend

The webhook backend is a simple HTTP server which accepts POST requests on one of its routes. This can be done easily in Go; here is a minimal, complete program:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		defer r.Body.Close()
		body, err := ioutil.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "can't read body", http.StatusBadRequest)
			return
		}
		fmt.Printf("%s\n", string(body))
	})
	fmt.Printf("Server listening on port 8080\n")
	panic(http.ListenAndServe(":8080", nil))
}

The code above starts a server that listens on port 8080.

The HTTP handler, registered on the root "/" path, prints out every request body it receives.
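You can run it with the Go toolchain (assuming you saved the code to a file named main.go):

$ go run main.go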

Wait a few seconds and check your server’s output; it should contain entries that look similar to this:

{"kind":"EventList","apiVersion":"audit.k8s.io/v1","metadata":{},"items":[{"level":"Metadata","auditID":"88e4e82a-b267-450a-8c49-85d723daa553","stage":"ResponseComplete","requestURI":"/apis/coordination.k8s.io/v1beta1?timeout=32s","verb":"get","user":{"username":"system:serviceaccount:kube-system:resourcequota-controller","uid":"f58afc33-1a60-47ac-bdf1-9a8d05d2f1c0","groups":["system:serviceaccounts","system:serviceaccounts:kube-system","system:authenticated"]},"sourceIPs":["10.0.2.15"],"userAgent":"kube-controller-manager/v1.18.8 (linux/amd64) kubernetes/9f2892a/system:serviceaccount:kube-system:resourcequota-controller","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-08-29T19:36:40.144608Z","stageTimestamp":"2020-08-29T19:36:40.144728Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:discovery\" of ClusterRole \"system:discovery\" to Group \"system:authenticated\""}}]}

Formatting the JSON object received:

{
   "kind":"EventList",
   "apiVersion":"audit.k8s.io/v1",
   "metadata":{},
   "items":[
      {
         "level":"Metadata",
         "auditID":"88e4e82a-b267-450a-8c49-85d723daa553",
         "stage":"ResponseComplete",
         "requestURI":"/apis/coordination.k8s.io/v1beta1?timeout=32s",
         "verb":"get",
         "user":{
            "username":"system:serviceaccount:kube-system:resourcequota-controller",
            "uid":"f58afc33-1a60-47ac-bdf1-9a8d05d2f1c0",
            "groups":[
               "system:serviceaccounts",
               "system:serviceaccounts:kube-system",
               "system:authenticated"
            ]
         },
         "sourceIPs":["10.0.2.15"],
         "userAgent":"kube-controller-manager/v1.18.8 (linux/amd64) kubernetes/9f2892a/system:serviceaccount:kube-system:resourcequota-controller",
         "responseStatus":{
            "metadata":{},
            "code":200
         },
         "requestReceivedTimestamp":"2020-08-29T19:36:40.144608Z",
         "stageTimestamp":"2020-08-29T19:36:40.144728Z",
         "annotations":{
            "authorization.k8s.io/decision":"allow",
            "authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:discovery\" of ClusterRole \"system:discovery\" to Group \"system:authenticated\""
         }
      }
   ]
}

I’m guessing this looks familiar from the Kubernetes auditing introduction post. Yes, these are Kubernetes audit events. You can see that the kind of the object we print out is an EventList.

The full set of event struct fields is available in the Kubernetes audit v1 API section on GitHub.

Upgrading The Webhook Server

Now that we have audit events flowing to our backend, let’s see how we can do cooler things with our server.

First, you’ll want to be able to access specific fields of each audit event.

The way to do this is by unmarshaling the request body JSON into the v1 EventList structure.

Let’s start by getting the Kubernetes audit API package:

$ go get k8s.io/apiserver/pkg/apis/audit/v1

Import this package into your project by adding the same path to the imports list (the package name is v1, and you’ll also need encoding/json), then reference it like so:

var events v1.EventList
err = json.Unmarshal(body, &events)
if err != nil {
	http.Error(w, "failed to unmarshal audit events", http.StatusBadRequest)
	return
}

You can now start filtering events.

For example, try filtering for events that were triggered by pod creation:

// isPodCreation returns true if the given event records a pod creation
func isPodCreation(event v1.Event) bool {
	return event.Verb == "create" &&
		event.Stage == v1.StageResponseComplete &&
		event.ObjectRef != nil &&
		event.ObjectRef.Resource == "pods"
}

  • Line #3 – validates that the verb is a create operation
  • Line #4 – keeps only audit events that reached the ResponseComplete stage
  • Lines #5-6 – validate that the created object is a pod
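To put the filter to use, you can iterate over the events in each incoming batch inside the HTTP handler. A minimal sketch (the printed message is just illustrative):

// Inside the handler, after unmarshaling the batch into `events`:
for _, event := range events.Items {
	if isPodCreation(event) {
		fmt.Printf("pod %q created by %q\n", event.ObjectRef.Name, event.User.Username)
	}
}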

Now trigger a pod creation event just to see that your code works.
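For example (the pod name and image are arbitrary):

$ kubectl run audit-test --image=nginx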

Moving forward, you can add more filters on different fields, as you see fit. Also, as mentioned above, the full server code is available here.
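For instance, here is a sketch of a filter that catches reads of Secret objects (the function name and the verbs it checks are my own choices):

// isSecretRead returns true if the given event records a read of a Secret
func isSecretRead(event v1.Event) bool {
	return (event.Verb == "get" || event.Verb == "list") &&
		event.ObjectRef != nil &&
		event.ObjectRef.Resource == "secrets"
}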

That’s all for this one. Follow me on Twitter for new posts and other cloud native updates! As always, feel free to leave comments with questions and/or remarks. See you in the next one!
