
CKS Guide

The 4C's of cloud-native computing

The 4 C's of cloud-native computing represent security in depth, where each "C" stands for a layer of isolation, ordered from the outside in.

The 4 C's

- **Cloud**: security of the entire infrastructure hosting the servers (public/private cloud, etc.)
- **Cluster**: the Kubernetes cluster itself
- **Container**: Docker containers; running, for example, in privileged mode
- **Code**: binaries, source code, code configuration, missing TLS, variables in code, etc.

Admission controllers

Image policy webhook

Admission configuration

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: <path-to-kubeconfig-file>
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: true
```

[!NOTE] With defaultAllow: true, if the admission webhook server is not reachable, all requests will be allowed

Enable admission controller

If Kubernetes components are deployed as system services, edit the service configuration with systemctl edit service_name. If Kubernetes was deployed with kubeadm, simply edit the static pod manifest (vim /etc/kubernetes/manifests/kube-apiserver.yaml): add ImagePolicyWebhook to the --enable-admission-plugins= list and pass the admission control config file via --admission-control-config-file=
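For a kubeadm cluster, the relevant part of the kube-apiserver manifest would look roughly like the sketch below; the config file path is an assumption, not a required location:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
    # assumed path; point this at your AdmissionConfiguration file
    - --admission-control-config-file=/etc/kubernetes/admission/admission-config.yaml
    # ...other flags unchanged
```

Since kube-apiserver runs as a static pod, remember that the admission config file (and the kubeconfig it references) must also be mounted into the pod, e.g. via a hostPath volume.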


[!TIP] to switch off auto-mounting the service account token on a pod, use automountServiceAccountToken: false
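A minimal sketch of a pod spec with token auto-mounting disabled (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod   # illustrative name
spec:
  # prevents the service account token from being mounted into containers
  automountServiceAccountToken: false
  containers:
  - name: app
    image: nginx
```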

Pod Decision Tree



Seccomp: Secure Computing

How syscalls work


How to check if Seccomp is enabled

grep -i seccomp /boot/config-$(uname -r)

Check seccomp status on the process

# 1. ssh into the container
# 2. list processes
ps -ef

# 3. grep for seccomp status
grep -i seccomp /proc/{PID}/status

If the Seccomp value is 2, seccomp is enabled for the process in filtered mode.

Seccomp modes

- **Mode 0**: disabled
- **Mode 1**: strict; blocks all syscalls except read, write, _exit and sigreturn
- **Mode 2**: filtered; syscalls are filtered selectively

Seccomp filter json file


there are 2 profile types:

  • whitelist: only specified syscalls are allowed, all others are rejected
  • blacklist: all syscalls are allowed unless specified in the file
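A minimal whitelist-style profile might look like the sketch below; the syscall list is illustrative, not complete:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "rt_sigreturn"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

For a blacklist-style profile, invert the logic: set "defaultAction": "SCMP_ACT_ALLOW" and list the syscalls to reject with "action": "SCMP_ACT_ERRNO".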

Docker seccomp filter

By default, Docker enables a seccomp filter (mode 2).

The default profile blocks around 60 of the roughly 300 available syscalls.

[!TIP] How to check which syscalls are blocked? Run the amicontained tool as a container to see the syscalls blocked by the default Docker profile:

docker run amicontained

Run the amicontained tool as a pod to see the syscalls blocked by the Kubernetes default profile:

k run amicontained --image amicontained -- amicontained

check pod logs

k logs amicontained

Enable seccomp in Kubernetes

Create a pod using a YAML spec and enable the RuntimeDefault profile under the pod's securityContext:

```yaml
securityContext:
  seccompProfile:
    type: RuntimeDefault
```

Custom seccomp profile in Kubernetes

[!ATTENTION] the default seccomp profile directory is /var/lib/kubelet/seccomp. A custom seccomp profile path must be relative to this path

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: audit-pod
  labels:
    app: audit-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - name: test-container
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=just made some syscalls!"
    securityContext:
      allowPrivilegeEscalation: false
```

[!NOTE] In order to apply a new seccomp profile, the pod must be deleted and re-created, e.g. with k replace --force -f pod.yaml

Seccomp logs

By default seccomp logs will be saved in /var/log/syslog

You can tail the logs for a specific pod with tail -f /var/log/syslog | grep {pod_name}


AppArmor

AppArmor is a Linux security module:
  • restrict access to specific objects in the system
  • determines what resources can be used by an application
  • more fine grained control than seccomp
  • installed in most systems
  • AppArmor profiles are stored under /etc/apparmor.d/

Example AppArmor Profile

```
#include <tunables/global>

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
```

Check if AppArmor is running

  • systemctl status apparmor
  • is AppArmor module enabled? cat /sys/module/apparmor/parameters/enabled
  • is AppArmor profile loaded into kernel? cat /sys/kernel/security/apparmor/profiles
  • use aa-status to check what profiles are loaded

AppArmor profiles load modes

- **enforce**: enforce and monitor any app that fits the profile
- **complain**: violations are only logged as events, not blocked
- **unconfined**: any task allowed, no logging

AppArmor in Kubernetes

- support added in v1.4, but still in beta
- to load a profile from the default location use apparmor_parser -q /etc/apparmor.d/{profile_name}

[!TIP] to secure a pod, add an annotation in this format: container.apparmor.security.beta.kubernetes.io/<container_name>: localhost/<profile_name> (or runtime/default, or unconfined)
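Putting the annotation together with the deny-write profile above, a pod spec might look like this sketch (pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor   # illustrative name
  annotations:
    # <container_name> is "hello"; the profile must already be loaded on the node
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox
    command: ["sh", "-c", "echo 'Hello AppArmor!' && sleep 1h"]
```

If the profile is not loaded on the node where the pod is scheduled, the pod will fail to start, so load it with apparmor_parser on every relevant node first.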

Use Case

AppArmor can be used, for example, to restrict access to a folder inside a pod/container.

Linux Capabilities

  • List of Linux Capabilities

- Capabilities are added and removed per container, under the container's securityContext:

```yaml
securityContext:
  capabilities:
    add: ["CAP1"]
    drop: ["CAP2"]
```

[!TIP] To check what capabilities are needed for any given command, run getcap /<path>/<command>; to check the capabilities used by a running process, run getpcaps PID
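A full pod sketch using the common hardening pattern of dropping everything and adding back only what is needed (pod name and capability choices are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: caps-demo   # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      capabilities:
        drop: ["ALL"]                    # start from zero capabilities
        add: ["NET_ADMIN", "SYS_TIME"]   # add back only what the app needs
```

Note that Kubernetes capability names omit the CAP_ prefix, so CAP_NET_ADMIN becomes NET_ADMIN.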

When to choose which

When should which tool be selected? Here is a list of use cases and corresponding tools.

- Reduce the risk of exploiting kernel vulnerabilities: **Seccomp**
- Prevent an app/container from accessing unwanted resources (files, directories, etc.): **AppArmor**
- Reduce what a compromised process can do to the system (coarse-grained): **Linux Capabilities**

Containers Isolation



gVisor

gVisor is an application kernel for containers that provides efficient defense-in-depth anywhere.

[!NOTE] Install gVisor


Kata Containers

Kata Containers is an open source container runtime, building lightweight virtual machines that seamlessly plug into the containers ecosystem.


[!NOTE] this requires nested virtualization (in case of running workloads on VMs) and can degrade performance. Some cloud providers do not support nested virtualization.

Containers isolation in Kubernetes

- run a container with the Kata runtime: docker run --runtime kata -d nginx
- run a container with the gVisor runtime: docker run --runtime runsc -d nginx

To use an alternative runtime in Kubernetes:

1. Create a RuntimeClass object
2. Use runtimeClassName in the pod definition to select that runtime
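Assuming the gVisor runsc handler is already configured in the node's container runtime, the two steps might look like this sketch (object and pod names are illustrative):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor            # illustrative name, referenced by pods below
handler: runsc            # must match the handler configured in containerd/CRI-O
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx   # illustrative name
spec:
  runtimeClassName: gvisor   # selects the RuntimeClass above
  containers:
  - name: nginx
    image: nginx
```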


Falco

Project created by Sysdig and donated to the CNCF.

Secures and monitors Linux systems using eBPF probes.

Main usecases

  • runtime observability and security
  • rules engine for filtering
  • notifications and alerting (remedy is possible with additional tools)

Falco components

High-level overview of falco components:

Falco Components


Falco rules & alerts

Falco comes with pre-defined set of rules and alerts/actions that can be triggered by those rules (bolded ones are more relevant to containerized workloads):

Falco Ruleset


Falco configuration

  • configuration is stored in /etc/falco/falco.yaml
  • default rule set is stored in falco_rules.yaml
- file to override rules is falco_rules.local.yaml
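A rule placed in falco_rules.local.yaml might look like this sketch; the condition is a simplified, illustrative version, not Falco's full built-in rule:

```yaml
# falco_rules.local.yaml (illustrative override)
- rule: Terminal Shell in Container
  desc: Detect a shell spawned inside a container (simplified example)
  condition: >
    spawned_process and container
    and proc.name in (bash, sh)
  output: "Shell started in container (user=%user.name container=%container.name command=%proc.cmdline)"
  priority: WARNING
```

Rules defined here with the same name as a default rule override it; restart the Falco service for changes to take effect.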

Using Falco

Start Falco as a service

systemctl start falco

Check Falco logs

journalctl -fu falco