This article is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives license (CC BY-NC-ND). In short: share it, but don't sell it or remix it.
I neither encourage nor discourage the use of the technologies mentioned here; I just happen to like them!
Found a bug? Feel free to reach out to me on those typical social media platforms everyone knows. You know the ones.

For a useful playground, we can spin up three VMs running Ubuntu LTS (20.04, 22.04 (used in this guide), or 24.04).
Run sudo hostnamectl set-hostname k8s-cp on the control plane, and sudo hostnamectl set-hostname k8s-w1 and k8s-w2 on the two worker nodes (these names match the prompts shown later in this guide). Repeat the following steps on all three nodes.
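If your lab has no DNS, the nodes can resolve each other through /etc/hosts entries. A sketch, demonstrated against a sample file so nothing real is modified (the 192.0.2.x addresses are placeholders for your VMs' IPs; the hostnames follow the prompts that appear later in this guide's output):

```shell
# On the real nodes, append these lines to /etc/hosts with sudo instead.
# 192.0.2.x addresses are documentation placeholders -- substitute your VMs' IPs.
cat >> hosts.sample <<'EOF'
192.0.2.10 k8s-cp
192.0.2.11 k8s-w1
192.0.2.12 k8s-w2
EOF
cat hosts.sample
```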
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
> overlay
> br_netfilter
> EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
sudo sysctl --system # will reload the config and apply latest changes
sudo apt-get update && sudo apt-get install -y apt-transport-https curl gnupg lsb-release
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo mkdir -p /etc/containerd
sudo bash -c 'containerd config default > /etc/containerd/config.toml'
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl enable containerd
sudo systemctl restart containerd
sudo swapoff -a
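Note that swapoff -a only disables swap until the next reboot; to keep it off permanently, comment out any swap entry in /etc/fstab. A sketch of the sed, shown here against a sample file so nothing on your system is touched:

```shell
# Sample fstab standing in for /etc/fstab
printf '/dev/sda2 / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' > fstab.sample
# Comment out every line whose filesystem type is swap
sed -i '/\sswap\s/s/^/#/' fstab.sample
cat fstab.sample
```

On the real nodes the equivalent would be `sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab`.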
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet=1.29.0-1.1 kubeadm=1.29.0-1.1 kubectl=1.29.0-1.1
sudo apt-mark hold kubelet kubeadm kubectl
sudo systemctl start kubelet
sudo systemctl enable kubelet
Go to the control plane and initialize the cluster:
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.29.0
This will display messages similar to:
[init] Using Kubernetes version: v1.29.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0808 10:42:06.212344 5379 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-cp kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.199.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-cp localhost] and IPs [172.16.199.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-cp localhost] and IPs [172.16.199.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.503268 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-cp as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-cp as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: d37om1.etnpz6h8onp9nu3x
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.199.128:6443 --token d37om1.etnpz6h8onp9nu3x \
--discovery-token-ca-cert-hash sha256:XXX
Just execute the following to have a valid ~/.kube/config file with the right context:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Test the cluster by getting the nodes:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-cp NotReady control-plane 6m38s v1.29.0
The node reports NotReady because no CNI network plugin is installed yet, so deploy one; here, Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
On the control plane, print a fresh join command for the workers:
kubeadm token create --print-join-command
root@k8s-w1:~# kubeadm join 172.16.199.130:6443 --token qy3tpt.fv63l09xkxiy7flr --discovery-token-ca-cert-hash sha256:c3247df1c256a652a06b1fdb188a175b71e7215f5eb1ecf2e1bf1dbc791a70bc
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-cp Ready control-plane 30m v1.29.0
k8s-w1 Ready <none> 6s v1.29.0
k8s-w2 Ready <none> 8m6s v1.29.0
Network Policies can be described as pod-level firewall rules.
They allow you to prevent or restrict the communication to and from Pods.
We will set up a policy that blocks all traffic except the requests we explicitly allow.
Let's start by creating a namespace to test in: kubectl create ns netpoltest
Deploy some pods, example with Nginx:
kubectl -n netpoltest create deployment nginx --image=nginx --replicas=1
jbarrio@k8s-w1:~$ kubectl -n netpoltest get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 19s
jbarrio@k8s-w1:~$ kubectl -n netpoltest get deployment nginx
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 23s
jbarrio@k8s-w1:~$ kubectl -n netpoltest get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7854ff8877-fw4gw 1/1 Running 0 113s 192.168.46.1 k8s-w2 <none> <none>
jbarrio@k8s-w1:~$
jbarrio@k8s-w1:~$ kubectl -n netpoltest run client1 --image=curlimages/curl:latest -- /bin/sh -c 'while true; do curl -m3 192.168.46.1; sleep 5; done'
pod/client1 created
jbarrio@k8s-w1:~$ kubectl -n netpoltest get pods
NAME READY STATUS RESTARTS AGE
client1 1/1 Running 0 10s
nginx-7854ff8877-fw4gw 1/1 Running 0 8m10s
jbarrio@k8s-w1:~$
kubectl -n netpoltest logs client1
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-by-default
  namespace: netpoltest
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
$ kubectl create -f np-deny-all-by-default.yaml
networkpolicy.networking.k8s.io/deny-all-by-default created
The client's requests now time out, as kubectl -n netpoltest logs client1 shows:
curl: (28) Connection timed out after 3002 milliseconds
NOTE: a namespaceSelector matches labels, not names, so we will first add a label to our namespace: kubectl label namespace netpoltest purpose=netpoltest
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-from-client1-to-nginx
  namespace: netpoltest
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: netpoltest
      podSelector:
        matchLabels:
          run: client1
    ports:
    - protocol: TCP
      port: 80
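As an aside: on reasonably recent clusters (v1.22+), every namespace automatically carries the label kubernetes.io/metadata.name, so the selector could also match on that instead of a custom label. Treat this as an optional variation:

```yaml
# Alternative selector (optional): match the automatic name label instead of a custom one
- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: netpoltest
```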
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-client1-to-nginx
  namespace: netpoltest
spec:
  podSelector:
    matchLabels:
      run: client1
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          purpose: netpoltest
      podSelector:
        matchLabels:
          app: nginx
    ports:
    - protocol: TCP
      port: 80
kubectl create -f np/allow-egress-from-client1-to-nginx.yaml
networkpolicy.networking.k8s.io/allow-egress-from-client1-to-nginx created
kubectl create -f np/allow-ingress-from-client1-to-nginx.yaml
networkpolicy.networking.k8s.io/allow-ingress-from-client1-to-nginx created
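One caveat worth knowing: client1 curls the nginx pod by IP, so it never needs DNS. If it used a hostname instead, the default-deny egress would also block name resolution, and you would extend the egress policy with an extra rule along these lines (illustrative, assuming CoreDNS runs in kube-system):

```yaml
# Additional egress rule (sketch): allow DNS lookups to the cluster DNS service
- to:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: kube-system
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP
    port: 53
```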
kubectl -n netpoltest logs client1 -f
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
100 615 100 615 0 0 164k 0 --:--:-- --:--:-- --:--:-- 200k
^C
Luckily for us, there are people out there working daily to help us check and improve the security of our platforms. In the case of Kubernetes, we can use the Center for Internet Security (CIS) Kubernetes Benchmark to assess the initial setup of our cluster and know exactly what we can improve.
Let's download, run the tasks and inspect the results:
jbarrio@k8s-cp:~$ wget -O kb-cp.yaml https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-master.yaml
--2024-08-09 06:40:57-- https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-master.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3878 (3.8K) [text/plain]
Saving to: ‘kb-cp.yaml’
kb-cp.yaml 100%[===========================================================>] 3.79K --.-KB/s in 0s
2024-08-09 06:40:57 (11.4 MB/s) - ‘kb-cp.yaml’ saved [3878/3878]
jbarrio@k8s-cp:~$ wget -O kb-wk.yaml https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-node.yaml
--2024-08-09 06:41:13-- https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-node.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.110.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2908 (2.8K) [text/plain]
Saving to: ‘kb-wk.yaml’
kb-wk.yaml 100%[===========================================================>] 2.84K --.-KB/s in 0s
2024-08-09 06:41:13 (13.4 MB/s) - ‘kb-wk.yaml’ saved [2908/2908]
jbarrio@k8s-cp:~$
Inspect the YAMLs and you will find that they basically create a couple of pods with enough privileges to query the host and retrieve data.
jbarrio@k8s-cp:~$ kubectl create -f kb-cp.yaml
job.batch/kube-bench-master created
jbarrio@k8s-cp:~$ kubectl create -f kb-wk.yaml
job.batch/kube-bench-node created
jbarrio@k8s-cp:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kube-bench-master-544zg 0/1 Completed 0 62s
kube-bench-node-vmh86 0/1 Completed 0 58s
jbarrio@k8s-cp:~$
Grab the logs and analyze them:
jbarrio@k8s-cp:~$ kubectl logs kube-bench-master-544zg > kb-cp-logs.txt
jbarrio@k8s-cp:~$ kubectl logs kube-bench-node-vmh86 > kb-wk-logs.txt
You will see some tests passing, some failing, some warnings:
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[PASS] 1.1.1 Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)
[PASS] 1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.3 Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)
[PASS] 1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.5 Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)
[PASS] 1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)
[PASS] 1.1.7 Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)
[PASS] 1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)
[WARN] 1.1.9 Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)
[WARN] 1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Manual)
[PASS] 1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)
[FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
[FAIL] 1.1.13 Ensure that the default administrative credential file permissions are set to 600 (Automated)
[FAIL] 1.1.14 Ensure that the default administrative credential file ownership is set to root:root (Automated)
[...]
== Summary master ==
36 checks PASS
11 checks FAIL
12 checks WARN
0 checks INFO
== Summary total ==
36 checks PASS
11 checks FAIL
12 checks WARN
0 checks INFO
Remove the jobs, fix the issues and re-deploy the artifacts to check how healthy your cluster is.
kubectl delete job kube-bench-master kube-bench-node
As we know from CKA, Ingresses are Kubernetes objects that manage the way requests to the cluster are processed by connecting them to Services.
Ingress objects support TLS termination as well as load balancing.
To serve a TLS-terminated service, we will need four things:
Let's start by doing things in the reverse order and do them in a custom namespace:
kubectl create ns tls-client1
kubectl create deployment tls-client1 --image=nginx --port=80 --namespace tls-client1
kubectl expose deployment tls-client1 --name=tls-client1-svc --port=80 --target-port=80 --type=ClusterIP --namespace tls-client1
service/tls-client1-svc exposed
jbarrio@k8s-cp:~$ kubectl get svc tls-client1-svc --namespace tls-client1
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tls-client1-svc ClusterIP 10.108.234.35 <none> 80/TCP 5s
jbarrio@k8s-cp:~$ curl 10.108.234.35
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
jbarrio@k8s-cp:~$
jbarrio@k8s-cp:~$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout jbarrio.test.key -out jbarrio.test.crt -subj "/CN=jbarrio.test"
.....+...+...+...+.+...........+...+.+...+......+.....+.+........+...+............+......+.+......+...+.....+....+...+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*......+...........+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*.......+...+..+....+..+...+.+......+.....+...+......+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
.+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*..+.+...+..+...+.........+...+............+.+..+...+....+...+..+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*..+..+.+..+.........+......+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-----
jbarrio@k8s-cp:~$ cat jbarrio.test.crt | base64
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREekNDQWZlZ0F3SUJBZ0lVY2N6TmhRWHBu
bWx3STk2eDZJem85a0UvUzhZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0Z6RVZNQk1HQTFVRUF3d01h
bUpoY25KcGJ5NTBaWE4wTUI0WERUSTBNRGd3T1RFM05UUXdObG9YRFRJMQpNRGd3T1RFM05UUXdO
bG93RnpFVk1CTUdBMVVFQXd3TWFtSmhjbkpwYnk1MFpYTjBNSUlCSWpBTkJna3Foa2lHCjl3MEJB
UUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUE3RGJmbEV3Tk9qa25hQjg0cTZHV2RBaSttanEzNTVlYTBF
elUKYm5acXk5VFFjMUtBazBlQkNZS0dRRGd6VmkxMDQyS1Z3MENKeTRNRDQ0ZndtZlNrdHJRT2c5
bjNOcXFLNnpWUwpQYnVDZDJUaFlxNm9FYWtHZTRZV3p0Vm5KMW56L0VvcllUc0VqRDZ3K2FlT2lM
c3l2VVQ4VWhpSGhsQzJwL0o0Cmh0ZGpNSUdtWGZlaGVadnpma291cVpmUnZaOHNkbzNuQ1FBbEQx
eStRV0oycEVxR1ZzbnZFR0ZIQTFFS1NzZ0UKbWcvV3lpMUlpL0VldlZDb3dMZnlLTDd5M3FaWG1Q
NDIvMHZwYmpQUjZWaUczY0dQNWdiUkNZd2ZvY1E3bVprRApVK2dFaUVyL0w5Y05MRWNkcmtNc0tq
WnlEWTBieEJMZ2NPSGVHcURLU282bE01NE56d0lEQVFBQm8xTXdVVEFkCkJnTlZIUTRFRmdRVUpn
UXc4dnUyUEw1WFdRM1lhUXR0ZktOWkZOd3dId1lEVlIwakJCZ3dGb0FVSmdRdzh2dTIKUEw1WFdR
M1lhUXR0ZktOWkZOd3dEd1lEVlIwVEFRSC9CQVV3QXdFQi96QU5CZ2txaGtpRzl3MEJBUXNGQUFP
QwpBUUVBQWdmRGtWWnFueGNobnNVam15NEx4bVJmK08zQXZtRzk0NytZQVRUa1BhRzU4WENqbWpS
UDNyMXN3ODZMClVLOW5kZ3kwZklvb09zekNIK3o1ajdwZ3lyay92dTlid2Rqb1VrN0pIT1k5Q3d3
QUxaTDdGOVU3eFpqYStweEYKQ1c4YnNNMGN0N1d5SGdUQmQxNWMvWXh6S3V1THFpS1lWQWdvUzcy
dHNTSkZuRmNHTkVDYkdsOC9OVnVqZHVqZAp5VkdWY25MNUNZRVI1WlhBVW9IUnFDbnpHb1h1ZW1h
M0ZoR0NadnZMcU5RNFBLZUJ1czlhTEVYSmxqSVJFZmx3CmQyUFowWmtad2Zxa1dJSStOMHBEck1S
QkR0QkdtSkpMQmUvYy9EZEhqNjFjeTR5dWRxemdQRndjb3Fla29TZTUKQk1nR3dUZi9RYWxXTWlH
R21razNzM1VTNmc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
jbarrio@k8s-cp:~$ cat jbarrio.test.key | base64
LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZB
QVNDQktZd2dnU2lBZ0VBQW9JQkFRRHNOdCtVVEEwNk9TZG8KSHppcm9aWjBDTDZhT3Jmbmw1clFU
TlJ1ZG1yTDFOQnpVb0NUUjRFSmdvWkFPRE5XTFhUallwWERRSW5MZ3dQagpoL0NaOUtTMnRBNkQy
ZmMycW9yck5WSTl1NEozWk9GaXJxZ1JxUVo3aGhiTzFXY25XZlA4U2l0aE93U01QckQ1CnA0Nkl1
eks5UlB4U0dJZUdVTGFuOG5pRzEyTXdnYVpkOTZGNW0vTitTaTZwbDlHOW55eDJqZWNKQUNVUFhM
NUIKWW5ha1NvWld5ZThRWVVjRFVRcEt5QVNhRDliS0xVaUw4UjY5VUtqQXQvSW92dkxlcGxlWS9q
Yi9TK2x1TTlIcApXSWJkd1kvbUJ0RUpqQitoeER1Wm1RTlQ2QVNJU3Y4djF3MHNSeDJ1UXl3cU5u
SU5qUnZFRXVCdzRkNGFvTXBLCmpxVXpuZzNQQWdNQkFBRUNnZjhiWDVYZW1aeEVJd1lZdFlXR1hq
Tm1sRWVDUXFFTk5ZUmw5SUZVUzdvVWo1VHQKeUV4emI0Q0VtWnVmMVk0ZGJueHloL29vVVB6OFF6
S0ZnT1lMbS9qUDNnM1FqeHhzT1ZjMVAvaWRMc2hGRGNFUQpyb0lnM3lBMmhhakpwSnY2U01lb0g1
cm5WRVRkVVJFeE1vNzhuclR0T2hGWXgzN3RFY0xEczZydW9BY3o3TzZvCkh5SVcxS0pGRnRJekNP
eHkwdTUybTc4NkhjR3E1RFFLcWtOYjVURzhRako0cllWYzJrMWg5MGEveTR3L0RPWlcKdlluaHVR
VGZURDFDUWVYanhITm82YjlvaVRqSlpnaXhaVW5uazZEc0h6dUFaZzljV25KVzlSOUpvdWx4enRa
Ugo0K3Z0aWtvQjBQVzU0WVRXVVBLdy95K2ZZMlpDMHVCakpvN0VPYUVDZ1lFQTdVc1gwek9zZTFj
aHNBQkFyVm00ClJLQmZONWZWUFJqenRmaWxuc2M5ZTlBcDc0bzc5dU5HeUFHTU1vYmtjRkhEQmox
Vng4djk3SDQyMFJGM0svNmEKeDNtN2pwY1VYMjJ3aUpUZjkyenpYUGc5eG8vYzhnSzhqWHdsUHBS
UElZM0YzaG1kaWZrS05GaklDOEZwYVQregpsYUlWUjFlSms4MisxdnlXY1U2bnF0OENnWUVBL3RZ
QlFtTURIaDRLOGhyRFltQlR6bGkyOENTdlhPSnY1bGhCClozbmkzN0M5V0NWdFNnWE4zblR4R1pF
ZFZOck5yRUIwSUt6cEtpNkRDd2Q5RTBTNjlpUy8zRWtCTTgwakNVSmgKdFI2Tk1YL0I4OVlKUVBT
aHlxYm9sM3lsaS96d0FGZmJZa0NDOHY1SG91dWM1ZGZtNGh5MnZ1WmQya1UwTGVOTworRFIvNnhF
Q2dZRUF1azg3WlpnczFLcVV5SnRxTExGRS9KenVKYmdRdE9maWNmM0lDK0pqWTlNTkdnWnZEbWxr
CkthVU5icDU2dmJWMGFuRzk2Q0ZDUFd6Ym5Vc1pSbkdoRlAxL1JYVlppWk9XQjZiY01taFlxNlk4
MnFvWnorcFcKSU1CWWZjbjBWMlA0OTJrbFNDOUEwOWpoT0ZtamFmK2FBT0pCMHNIb1F5Ukhzb1Nm
bWxjblRnRUNnWUJDNjN2bApMUThTaDUwa09yYjRUSWQxZG9LRHlYNXJpK01Ld0Q1Y3AvdFY5cG1p
WGlHM0FKTXhTZEJPZ0hjTlkzQURQZUhBCit1YzM5b0xmRUpaZHl2eTF5cXkvY2tSb2tBUVZXR05F
SnNPNUxlMkcxTHdWWEtob0NUQ25KMHBwMm9CRDlzNWIKbk1sR2VsUGhpckhuQWExVnoyaUY5UVRN
WHNQM0VPd1o3ZDl4b1FLQmdRQ3FFZ0NQTGxsNENVMSt0aS91N0lpbQp6K1RXNit4V3ZyMEZ1bFpm
ZkJ2Tkh3QXc0Vjh3RVplUlMvTEo1L0hkaUliSFNYSkY2dHEwSHN0dlZqNlJ0OUpUClpDMGFnRkJE
V2Y2aFhoRmRKU08ya01QbEI1dzlMZ0MzTnhIYmdaZVVGdVJBSlpudE9HblNFcTl6S3g0MFYzcmQK
c29STElhai94QWhydmlJK2p2VjVSZz09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
Note that kubectl create secret tls consumes the raw PEM files and base64-encodes them itself; the dumps above are only needed if you write the Secret manifest by hand.
jbarrio@k8s-cp:~$ kubectl create secret tls tls-secret \
--namespace tls-client1 \
--cert=jbarrio.test.crt \
--key=jbarrio.test.key
secret/tls-secret created
# ingress-tls.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-jbarrio-test
  namespace: tls-client1
spec:
  tls:
  - hosts:
    - jbarrio.test
    secretName: tls-secret
  rules:
  - host: jbarrio.test
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tls-client1-svc
            port:
              number: 80
jbarrio@k8s-cp:~$ kubectl create -f ingress-tls.yaml
ingress.networking.k8s.io/tls-jbarrio-test created
jbarrio@k8s-cp:~$ kubectl -n tls-client1 describe ingress tls-jbarrio-test
Name: tls-jbarrio-test
Labels: <none>
Namespace: tls-client1
Address:
Ingress Class: <none>
Default backend: <default>
TLS:
tls-secret terminates jbarrio.test
Rules:
Host Path Backends
---- ---- --------
jbarrio.test
/ tls-client1-svc:80 (192.168.228.69:80)
Annotations: <none>
Events: <none>
And this is how you serve content using TLS termination at the Ingress Controller level.
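To exercise the endpoint end to end you also need an ingress controller (none is deployed in this walkthrough, which is why the Address field above is empty). Assuming one such as ingress-nginx is installed and reachable, a sketch of the test, with INGRESS_IP as a placeholder:

```shell
# INGRESS_IP is a placeholder for your ingress controller's external address
INGRESS_IP=203.0.113.10
# --resolve avoids touching DNS; -k accepts our self-signed certificate
curl -k --resolve jbarrio.test:443:${INGRESS_IP} https://jbarrio.test/
```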
Key takeaways:
To make a cluster more secure, we need to focus on different aspects: network, permissions, etc. Let's dive into them.
We will secure the ports that cluster components use to communicate with each other.
Let's have a view of the standard ports:
CONTROL PLANE
| Port        | Service                 |
|-------------|-------------------------|
| 6443        | Kubernetes API server   |
| 2379-2380   | etcd                    |
| 10250       | kubelet API             |
| 10251       | kube-scheduler          |
| 10252       | kube-controller-manager |
WORKER NODES
| Port        | Service     |
|-------------|-------------|
| 10250       | kubelet API |
| 30000-32767 | NodePorts   |
One way of securing the cluster's ports is proper network segmentation, combined with specific firewall rules that avoid exposing ports to undesired networks.
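As a configuration sketch (assuming Ubuntu's default ufw firewall; the 192.0.2.0/24 source range is a placeholder for your node network), the control-plane rules could look like:

```shell
# Control plane: only the node network (placeholder 192.0.2.0/24) may reach these ports
sudo ufw allow from 192.0.2.0/24 to any port 6443 proto tcp       # Kubernetes API server
sudo ufw allow from 192.0.2.0/24 to any port 2379:2380 proto tcp  # etcd
sudo ufw allow from 192.0.2.0/24 to any port 10250 proto tcp      # kubelet API
sudo ufw enable
```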
In short: if attackers get access to a pod, they will be able to use its ServiceAccount to access the Kubernetes API. To prevent unwanted access, we need to define RBAC policies via Roles and ClusterRoles and attach them to our ServiceAccounts.
Thus, it is very important to just assign the needed permissions for the ServiceAccount that will be used to run a pod.
In the below example we have a pod that needs to list the rest of pods in its own namespace (podsns). For this purpose, we will need to create 1) a ServiceAccount that will be used in the pod, 2) a Role that allows only pod listing in the specified namespace and 3) a RoleBinding to link the role to the account. Finally, we will assign the new SA to the pod:
$ kubectl create ns podsns
namespace/podsns created
$ kubectl -n podsns create sa pod-monitor
serviceaccount/pod-monitor created
$ kubectl -n podsns create role pod-monitor-role --verb=list --resource=pods
role.rbac.authorization.k8s.io/pod-monitor-role created
$ kubectl -n podsns create rolebinding pod-monitor-role-binding --role=pod-monitor-role --serviceaccount=podsns:pod-monitor
rolebinding.rbac.authorization.k8s.io/pod-monitor-role-binding created
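Finally, a sketch of the pod that uses the new ServiceAccount (the image and command are placeholders; any image containing kubectl would do):

```yaml
# pod-monitor.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: pod-monitor
  namespace: podsns
spec:
  serviceAccountName: pod-monitor
  containers:
  - name: monitor
    image: bitnami/kubectl:latest   # placeholder image that ships kubectl
    command: ["sleep", "infinity"]
```

From inside this pod, listing pods in podsns succeeds (the Role allows the list verb), while any other verb, such as deleting a pod, is forbidden.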
Namespaces are a Linux kernel feature that provides isolation for various system resources. When you create a namespace, you effectively create a separate instance of a particular resource that processes in that namespace can use, isolated from other namespaces. There are several types of namespaces, each isolating a different aspect of the system.
Container namespaces are created when a container is launched. These namespaces isolate the container's view of system resources from the host and other containers.
Types of Container Namespaces:
Cgroups, short for control groups, are a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. While namespaces provide isolation of resources, cgroups manage and limit the usage of these resources.
When specifying limits in a pod definition, internally Kubernetes uses the cgroups interface to create cgroups to limit the hardware usage.
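For instance, a pod spec like the following (illustrative names and values) makes the kubelet create cgroup limits for the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod   # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"      # scheduler hint, also reflected in the cgroup's CPU weight
        memory: "128Mi"
      limits:
        cpu: "500m"      # enforced as a CPU quota in the container's cgroup
        memory: "256Mi"  # enforced as a memory limit; exceeding it OOM-kills the container
```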
There will be times when you will need to instantiate pods that run with privileged permissions, such as when accessing specific hardware accessories, or similar.
But before going deeper, let's first review what privileged mode vs non-privileged mode vs real mode means in current computing:
Privileged Mode (Kernel Mode):
Real Mode: The original mode of the Intel 8086 CPU (introduced in 1978), allowing direct access to memory and hardware without protection or multitasking capabilities. All software ran in the same mode without distinction between privileged and non-privileged operations, leading to potential system instability.
Giving a quick overview of the implementation in ARM for short reference:
ARM 64 (ARMv8-A):
Using a graph, it would look like:
To enable privileged mode, a pod needs to explicitly specify it in its definition:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true
VPC Structure:
- 10.0.0.0/16
Subnets:
- 10.0.1.0/24 (Private, no direct internet access)
- 10.0.2.0/24 (Private, controlled egress)
- 10.0.3.0/24 (Public-facing services with controlled access)
- 10.0.4.0/24 (Bastion host, VPN gateway, monitoring tools)
- 10.0.5.0/24 (Dedicated for storage services and persistent volumes)
VLANs (if applicable):
This setup ensures strict isolation between different components of the Kubernetes cluster, reducing the attack surface and securing critical resources.
AppArmor is a Linux security module that allows you to restrict the capabilities of programs, including Kubernetes Pods, by enforcing specific security policies. It works by defining profiles that dictate what resources a program can access, such as files, network interfaces, or system capabilities.
Examples in Kubernetes:
Restricting File Access: You can create an AppArmor profile that limits a Pod to only access specific files or directories. For example, a Pod running a web server might be restricted to only read from /var/www and not access any other part of the filesystem.
Limiting Network Access: An AppArmor profile could prevent a Pod from opening certain network ports or using specific network protocols. This can be useful if you want to ensure that a Pod can only communicate over HTTP and not perform other network activities.
In Kubernetes, AppArmor profiles are applied to Pods via annotations if using Kubernetes <= v1.29:
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/container-name: localhost/profile-name
When running v1.30 or later, you can specify it via the securityContext field:
[...]
spec:
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: name-of-apparmor-profile
  containers:
  - name: nginx
    image: nginx
[...]
This way, AppArmor helps in securing Kubernetes Pods by reducing the attack surface and limiting what a compromised container can do.
Topics we will be covering:
When you specify a securityContext for a pod, you define the privilege and access-control settings that the pod (and its containers) is allowed to have.
Let's start by creating a pod with some securityContext options:
$ cat nginx-secure.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-secure
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    securityContext:
      runAsUser: 1000 # Run the container as a specific user ID (non-root)
      runAsGroup: 1000 # Run the container as a specific group ID
      allowPrivilegeEscalation: false # Prevent the container from gaining additional privileges
      readOnlyRootFilesystem: true # Mount the root filesystem as read-only
      capabilities:
        drop:
        - NET_RAW # Drop the capability to use raw networking
      seccompProfile:
        type: RuntimeDefault # Use the default seccomp profile for the runtime
Let's create an AppArmor profile for Nginx and apply it to an Nginx pod, specifying several securityContext parameters:
$ cat /etc/apparmor.d/nginx-profile
# Profile Name
profile nginx-profile /usr/sbin/nginx {
  # Allow read access to nginx configuration files
  /etc/nginx/nginx.conf r,
  /etc/nginx/conf.d/* r,
  /etc/nginx/sites-enabled/* r,
  # Allow read access to SSL certificates and keys
  /etc/ssl/** r,
  # Allow read access to web content
  /var/www/** r,
  # Allow writing to the nginx PID file
  /var/run/nginx.pid rw,
  # Allow read/write access to log files
  /var/log/nginx/* rw,
  # Allow binding to HTTP and HTTPS ports
  capability net_bind_service,
  # Allow network access (for handling requests)
  network inet stream,
  network inet6 stream,
  # Deny everything else by default
  deny /bin/* r,
  deny /sbin/* r,
  deny /usr/bin/* r,
  deny /usr/sbin/* r,
  deny /usr/lib/* r,
  deny /lib/* r,
  deny /var/* rw,
}
$ sudo apparmor_parser /etc/apparmor.d/nginx-profile
NOTE: Pod Security Policies were deprecated in Kubernetes v1.21 and removed in v1.25 (replaced by Pod Security Admission); they are covered here for reference.
Pod Security Policies allowed us, the cluster administrators, to control which security-related configurations Pods were allowed to run with.
Things that we can include in a Pod Security Policy are:
This admission controller is disabled by default. To enable it, add PodSecurityPolicy to the --enable-admission-plugins= flag of the API server. Beware that enabling it without creating any policy (or without allowing users to use one) will result in no Pods being allowed to be created!
Some available plugins are: PodSecurity, LimitRanger, ServiceAccount, NodeRestriction, ...
A PodSecurityPolicy is just a Kubernetes object that can be defined in a YAML:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-runas-any-non-privileged
spec:
  privileged: false # Disallow privileged containers
  allowPrivilegeEscalation: false # Disallow privilege escalation
  runAsUser:
    rule: MustRunAsNonRoot # Ensure the container runs as a non-root user
  seLinux:
    rule: RunAsAny # Allow any SELinux options
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'secret'
  - 'persistentVolumeClaim'
  - 'downwardAPI'
Create a namespace and a ServiceAccount, and then create a ClusterRole that allows the usage of the above PodSecurityPolicy and link it to the SA created in the test-psp namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: test-psp
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: psp-user
  namespace: test-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-psp-runas-any-non-privileged
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - psp-runas-any-non-privileged
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bind-psp-to-sa
  namespace: test-psp
subjects:
- kind: ServiceAccount
  name: psp-user
  namespace: test-psp
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-psp-runas-any-non-privileged
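To see the policy in action, you could try a pod that violates it. This is just a sketch (the pod name is illustrative):

```yaml
# Hypothetical pod that the PSP above would reject: it requests privileged mode
apiVersion: v1
kind: Pod
metadata:
  name: psp-violation-test
  namespace: test-psp
spec:
  serviceAccountName: psp-user
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true # violates privileged: false in psp-runas-any-non-privileged
```

Creating this pod as the psp-user ServiceAccount should be rejected by the PodSecurityPolicy admission plugin.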
Open Policy Agent Gatekeeper allows us to enforce customizable policies on any Kubernetes object at creation time. They are defined using the OPA Constraint Framework.
With OPA Gatekeeper you can:
jbarrio@k8s-cp:~$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.0/deploy/gatekeeper.yaml
namespace/gatekeeper-system created
resourcequota/gatekeeper-critical-pods created
customresourcedefinition.apiextensions.k8s.io/assign.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/assignimage.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/assignmetadata.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/configs.config.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constraintpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplatepodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplates.templates.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/expansiontemplate.expansion.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/expansiontemplatepodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/modifyset.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/mutatorpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/providers.externaldata.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/syncsets.syncset.gatekeeper.sh created
serviceaccount/gatekeeper-admin created
role.rbac.authorization.k8s.io/gatekeeper-manager-role created
clusterrole.rbac.authorization.k8s.io/gatekeeper-manager-role created
rolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
secret/gatekeeper-webhook-server-cert created
service/gatekeeper-webhook-service created
deployment.apps/gatekeeper-audit created
deployment.apps/gatekeeper-controller-manager created
poddisruptionbudget.policy/gatekeeper-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration created
jbarrio@k8s-cp:~$
This is an example of a constraint template that will deny the usage of the host network:
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenypodswithhostnetwork
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPodsWithHostNetwork
      validation:
        openAPIV3Schema:
          properties:
            message:
              type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sdenypodswithhostnetwork
      violation[{"msg": msg}] {
        input.review.object.spec.hostNetwork == true
        msg := sprintf("Host network is not allowed in pod %s", [input.review.object.metadata.name])
      }
To enforce it in our cluster, define a Constraint:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPodsWithHostNetwork
metadata:
  name: deny-hostnetwork-pods
spec:
  enforcementAction: deny
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
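A quick way to exercise the constraint, assuming the template and constraint above are installed (the pod name is illustrative):

```yaml
# Pod that the K8sDenyPodsWithHostNetwork constraint should reject
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-test
spec:
  hostNetwork: true # triggers the violation rule in the template
  containers:
  - name: nginx
    image: nginx
```

Attempting to create it should fail with the "Host network is not allowed" message from the template's Rego rule.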
As we know from CKA, a Container Runtime is software responsible for running containers on a host operating system. It manages the container lifecycle, including creating, starting, stopping, and deleting containers. Container runtimes are a crucial part of container orchestration systems like Kubernetes, as they provide the necessary functionality to execute containerized applications in a consistent and isolated manner.
A Container Runtime Sandbox is a security-focused runtime that restricts how containers interact with the host. Sandboxes are very useful when you must run untrusted workloads on your cluster, but they normally come with a performance penalty, so we only use them when needed.
An example of a runtime sandbox is the combination of gVisor and runsc. gVisor is a Linux application kernel that runs on the host as a layer between the OS and the containers; runsc is an OCI-compliant container runtime that integrates gVisor with orchestrators like Kubernetes.
Kata containers is a different implementation: it creates a lightweight VM and runs the container transparently inside it.
To build a runtime sandbox, we need to follow three steps:
You need to do this on all nodes:
root@k8s-cp:~# curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
root@k8s-cp:~#
root@k8s-cp:~# echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" | sudo tee /etc/apt/sources.list.d/gvisor.list > /dev/null
root@k8s-cp:~# apt-get update
Hit:1 http://ports.ubuntu.com/ubuntu-ports jammy InRelease
Hit:2 https://download.docker.com/linux/ubuntu jammy InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports jammy-updates InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Get:6 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease [129 kB]
Get:7 https://storage.googleapis.com/gvisor/releases release InRelease [4,132 B]
Hit:5 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.29/deb InRelease
Get:8 https://storage.googleapis.com/gvisor/releases release/main arm64 Packages [512 B]
Fetched 134 kB in 1s (217 kB/s)
Reading package lists... Done
root@k8s-cp:~# apt-get install -y runsc
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
runsc
0 upgraded, 1 newly installed, 0 to remove and 46 not upgraded.
Need to get 51.5 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://storage.googleapis.com/gvisor/releases release/main arm64 runsc arm64 20240820.0 [51.5 MB]
Fetched 51.5 MB in 4s (13.2 MB/s)
Selecting previously unselected package runsc.
(Reading database ... 116197 files and directories currently installed.)
Preparing to unpack .../runsc_20240820.0_arm64.deb ...
Unpacking runsc (20240820.0) ...
Setting up runsc (20240820.0) ...
Scanning processes...
Scanning linux images...
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
root@k8s-cp:~#
Edit /etc/containerd/config.toml on each node:
# add
disabled_plugins = ["io.containerd.internal.v1.restart"]

# add to the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes] section
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"

# change
[plugins."io.containerd.runtime.v1.linux"]
  shim_debug = true
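The edit above can be sketched as a few shell commands. This sketch works on a temporary copy so it is self-contained; on a real node you would edit /etc/containerd/config.toml directly and then restart containerd:

```shell
# Work on a temporary copy of the containerd config (illustrative path)
cfg=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]\n' > "$cfg"
# Append the runsc runtime stanza
cat >> "$cfg" <<'EOF'
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
    runtime_type = "io.containerd.runsc.v1"
EOF
# Confirm the stanza landed before restarting containerd
grep -q 'runtime_type = "io.containerd.runsc.v1"' "$cfg" && echo "runsc configured"
# on a real node: sudo systemctl restart containerd
```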
Then define a RuntimeClass that uses the runsc handler:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: runsc-sb
handler: runsc
jbarrio@k8s-cp:~$ kubectl create -f runsc-sb.yaml
runtimeclass.node.k8s.io/runsc-sb created
jbarrio@k8s-cp:~$ kubectl run bb-no-sb --image=busybox --dry-run=client -o yaml -- /bin/sh -c "while true; do sleep 3; echo hello; done"
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bb-no-sb
  name: bb-no-sb
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do sleep 3; echo hello; done
    image: busybox
    name: bb-no-sb
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
jbarrio@k8s-cp:~$ kubectl run bb-no-sb --image=busybox --dry-run=client -o yaml -- /bin/sh -c "while true; do sleep 3; echo hello; done" >> bb-no-sb.yaml
jbarrio@k8s-cp:~$ kubectl create -f bb-no-sb.yaml
pod/bb-no-sb created
jbarrio@k8s-cp:~$ kubectl get pod bb-no-sb
NAME READY STATUS RESTARTS AGE
bb-no-sb 1/1 Running 0 8s
jbarrio@k8s-cp:~$
jbarrio@k8s-cp:~$ cat bb-sb.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bb-sb
  name: bb-sb
spec:
  runtimeClassName: runsc-sb
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do sleep 3; echo hello; done
    image: busybox
    name: bb-sb
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
jbarrio@k8s-cp:~$ kubectl create -f bb-sb.yaml
pod/bb-sb created
jbarrio@k8s-cp:~$ kubectl get pod bb-sb
NAME READY STATUS RESTARTS AGE
bb-sb 1/1 Running 0 14s
jbarrio@k8s-cp:~$
jbarrio@k8s-cp:~$ kubectl exec bb-no-sb -- dmesg
dmesg: klogctl: Operation not permitted
command terminated with exit code 1
jbarrio@k8s-cp:~$ kubectl exec bb-sb -- dmesg
[ 0.000000] Starting gVisor...
[ 0.564598] Conjuring /dev/null black hole...
[ 0.699256] Mounting deweydecimalfs...
[ 0.888110] Preparing for the zombie uprising...
[ 1.021069] Committing treasure map to memory...
[ 1.262593] Forking spaghetti code...
[ 1.444678] Searching for needles in stacks...
[ 1.574919] Moving files to filing cabinet...
[ 1.717879] Waiting for children...
[ 1.912068] Feeding the init monster...
[ 2.114108] Consulting tar man page...
[ 2.139353] Setting up VFS...
[ 2.202439] Setting up FUSE...
[ 2.632834] Ready!
mTLS stands for Mutual Transport Layer Security: both ends authenticate each other, and all communications are encrypted.
A requestor creates a CSR object to request a new certificate.
This CSR can be approved or denied.
We can implement RBAC to manage permissions related to CSRs.
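For example, a ClusterRole that lets its subjects approve client certificates for the default client signer could look like this (the role name is illustrative; the resources and verbs follow the certificates.k8s.io API):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-approver # illustrative name
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/approval"]
  verbs: ["update"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["signers"]
  resourceNames: ["kubernetes.io/kube-apiserver-client"]
  verbs: ["approve"]
```

Bind it to a user or group with a ClusterRoleBinding to delegate CSR approval.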
Creating a CSR:
$ sudo apt-get install -y golang-cfssl
$ cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "CN": "example.com",
  "key": {
    "algo": "ecdsa",
    "size": 256
  },
  "names": [
    {
      "O": "system:nodes"
    }
  ],
  "hosts": [
    "example.com",
    "www.example.com"
  ]
}
EOF
2024/08/23 11:49:24 [INFO] generate received request
2024/08/23 11:49:24 [INFO] received CSR
2024/08/23 11:49:24 [INFO] generating key: rsa-2048
2024/08/23 11:49:24 [INFO] encoded CSR
$
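The request field of the CertificateSigningRequest object below is simply the PEM CSR encoded as base64. Assuming cfssljson wrote server.csr, you can produce that value as follows (a stand-in PEM body is used here so the commands are self-contained):

```shell
# Stand-in for the server.csr file written by cfssljson above (content illustrative)
printf -- '-----BEGIN CERTIFICATE REQUEST-----\nMIIB...\n-----END CERTIFICATE REQUEST-----\n' > server.csr
# Encode it as a single base64 line for the .spec.request field
request=$(base64 -w0 < server.csr)
echo "$request"
# Round-trip check: decoding must give back the original PEM
[ "$(printf '%s' "$request" | base64 -d)" = "$(cat server.csr)" ] && echo "round-trip OK"
```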
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-csr
spec:
  request: |
    LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQklUQ0J5UUlCQURBdE1SVXdF
    d1lEVlFRS0V3eHplWE4wWlcwNmJtOWtaWE14RkRBU0JnTlZCQU1UQzJWNApZVzF3YkdVdVkyOXRN
    Rmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUV2K1ZNc0NORG0yd0JKeDNoCnBFbjdX
    bkxlY2ZoVE1QL2hTZmpZTUxmL3FTdEhoWlVmSjg0WXllNCtkNDdBa2NLNHNZYmw4K1RielNaTEJW
    d0sKdGNMdkdLQTZNRGdHQ1NxR1NJYjNEUUVKRGpFck1Da3dKd1lEVlIwUkJDQXdIb0lMWlhoaGJY
    QnNaUzVqYjIyQwpEM2QzZHk1bGVHRnRjR3hsTG1OdmJUQUtCZ2dxaGtqT1BRUURBZ05IQURCRUFp
    QmxMTXgrUUF4aFlvZnlZdmR4CmkzUkJEdjcrWjdQM1RzNjFOQWlDVC9mRDVRSWdMbU1VL3B3Z1VY
    b3czQXZ4VG4yVTN2ZVZJNEhmNU53MnVrN1oKN1JzMC9Zdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUg
    UkVRVUVTVC0tLS0tCg==
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - digital signature
  - key encipherment
  - client auth
jbarrio@k8s-cp:~$ kubectl create -f tls-csr.yaml
certificatesigningrequest.certificates.k8s.io/example-csr created
jbarrio@k8s-cp:~$ kubectl get csr example-csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
example-csr 3s kubernetes.io/kube-apiserver-client kubernetes-admin <none> Pending
jbarrio@k8s-cp:~$ kubectl certificate approve example-csr
certificatesigningrequest.certificates.k8s.io/example-csr approved
jbarrio@k8s-cp:~$ kubectl get csr example-csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
example-csr 7s kubernetes.io/kube-apiserver-client kubernetes-admin <none> Approved,Issued
$ kubectl get csr example-csr -o yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  creationTimestamp: "2024-08-23T14:22:24Z"
  name: example-csr
  resourceVersion: "564595"
  uid: 6155dbee-34d6-45b3-a6dd-e8439c14cd92
spec:
  groups:
  - kubeadm:cluster-admins
  - system:authenticated
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQklUQ0J5UUlCQURBdE1SVXdFd1lEVlFRS0V3eHplWE4wWlcwNmJtOWtaWE14RkRBU0JnTlZCQU1UQzJWNApZVzF3YkdVdVkyOXRNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUV2K1ZNc0NORG0yd0JKeDNoCnBFbjdXbkxlY2ZoVE1QL2hTZmpZTUxmL3FTdEhoWlVmSjg0WXllNCtkNDdBa2NLNHNZYmw4K1RielNaTEJWd0sKdGNMdkdLQTZNRGdHQ1NxR1NJYjNEUUVKRGpFck1Da3dKd1lEVlIwUkJDQXdIb0lMWlhoaGJYQnNaUzVqYjIyQwpEM2QzZHk1bGVHRnRjR3hsTG1OdmJUQUtCZ2dxaGtqT1BRUURBZ05IQURCRUFpQmxMTXgrUUF4aFlvZnlZdmR4CmkzUkJEdjcrWjdQM1RzNjFOQWlDVC9mRDVRSWdMbU1VL3B3Z1VYb3czQXZ4VG4yVTN2ZVZJNEhmNU53MnVrN1oKN1JzMC9Zdz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUgUkVRVUVTVC0tLS0tCg==
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - digital signature
  - key encipherment
  - client auth
  username: kubernetes-admin
status:
  certificate: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNnRENDQVdpZ0F3SUJBZ0lRQ2ppOW5UcW5vdnMwc3pONHFNeUNIekFOQmdrcWhraUc5dzBCQVFzRkFEQVYKTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1CNFhEVEkwTURneU16RTBNVGN6TUZvWERUSTFNRGd5TXpFMApNVGN6TUZvd0xURVZNQk1HQTFVRUNoTU1jM2x6ZEdWdE9tNXZaR1Z6TVJRd0VnWURWUVFERXd0bGVHRnRjR3hsCkxtTnZiVEJaTUJNR0J5cUdTTTQ5QWdFR0NDcUdTTTQ5QXdFSEEwSUFCTC9sVExBalE1dHNBU2NkNGFSSisxcHkKM25INFV6RC80VW40MkRDMy82a3JSNFdWSHlmT0dNbnVQbmVPd0pIQ3VMR0c1ZlBrMjgwbVN3VmNDclhDN3hpagpmekI5TUE0R0ExVWREd0VCL3dRRUF3SUZvREFUQmdOVkhTVUVEREFLQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CCkFmOEVBakFBTUI4R0ExVWRJd1FZTUJhQUZHTnFzaVhoT1JJdmdmNFhNQ0w2RDRyM3daQTdNQ2NHQTFVZEVRUWcKTUI2Q0MyVjRZVzF3YkdVdVkyOXRnZzkzZDNjdVpYaGhiWEJzWlM1amIyMHdEUVlKS29aSWh2Y05BUUVMQlFBRApnZ0VCQUdmcDhJeTFUVUpYY2J1RkxlbWRPOWNuYWRiSkdDaVVVRzBELzFMeE1mZ0RBR2tZWjdyUVZOQkM1YmwwCnUxdWszL2xzOXpERUZmTCtKS29ESUtkK2RHM3Y1YzYzT3krcGRudmtWNlBjdVRKVVRnK21XejQ1RFVDU0hqYmkKcXc4bTV1TjFYbVV4NjNQcmQzUFpWZHpxRWpacExvNk9BWHNpYk82TCttSWd6V0xLQjNGTTIyWUo5b05YVENEYgppdWpBcGZkczBISCtmRUl2NnlqWXFsSVN0R2VEUzVPdnlITlFvUnRsSjNjUjhQYXlyNGIyaW5iZSs5RUxyZ0pECkxLeEFmUkNVSFlaNWZta3kydG9iQ2xLejlqM2czVFB1SUx6L0YyVWJzanpGMmpRVXd0VkZsWktNSkxkeTE0bWcKdFY4bmVVOEpUaHJhSlR6SzBxK0ovaHN0dEFRPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
  conditions:
  - lastTransitionTime: "2024-08-23T14:22:30Z"
    lastUpdateTime: "2024-08-23T14:22:30Z"
    message: This CSR was approved by kubectl certificate approve.
    reason: KubectlApprove
    status: "True"
    type: Approved
We will cover these topics:
Just remember that you can pin a container image by appending its digest, referencing it as imageName:tag@sha256:hash (when both are present, the digest takes precedence over the tag).
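For instance (the digest below is made up for illustration):

```shell
# Reference an image by tag plus digest; when both are present the digest wins
image="nginx:1.27@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
# A digest reference always ends in @sha256: followed by 64 hex characters
echo "$image" | grep -Eq '@sha256:[0-9a-f]{64}$' && echo "digest-pinned"
```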
While there are several security tools to perform static analysis, we will focus just on some manual analysis.
Things to try to look for:
Things to avoid:
Trivy is a CLI tool that scans container images looking for vulnerabilities:
sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy
root@k8s-cp:~# trivy image nginx:1.19.10
2024-08-24T16:28:19Z INFO [vuln] Vulnerability scanning is enabled
2024-08-24T16:28:19Z INFO [secret] Secret scanning is enabled
2024-08-24T16:28:19Z INFO [secret] If your scanning is slow, please try '--scanners vuln' to disable secret scanning
2024-08-24T16:28:19Z INFO [secret] Please see also https://aquasecurity.github.io/trivy/v0.54/docs/scanner/secret#recommendation for faster secret detection
2024-08-24T16:28:25Z INFO Detected OS family="debian" version="10.9"
2024-08-24T16:28:25Z INFO [debian] Detecting vulnerabilities... os_version="10" pkg_num=135
2024-08-24T16:28:26Z INFO Number of language-specific files num=0
2024-08-24T16:28:26Z WARN Using severities from other vendors for some vulnerabilities. Read https://aquasecurity.github.io/trivy/v0.54/docs/scanner/vulnerability#severity-selection for details.
2024-08-24T16:28:28Z WARN This OS version is no longer supported by the distribution family="debian" version="10.9"
2024-08-24T16:28:28Z WARN The vulnerability detection may be insufficient because security updates are not provided
nginx:1.19.10 (debian 10.9)
===========================
Total: 580 (UNKNOWN: 9, LOW: 157, MEDIUM: 209, HIGH: 159, CRITICAL: 46)
Library | Vulnerability       | Severity | Status   | Installed Version | Fixed Version | Title
--------+---------------------+----------+----------+-------------------+---------------+----------------------------------------------------------------
apt     | CVE-2011-3374       | LOW      | affected | 1.8.2.3           |               | It was found that apt-key in apt, all versions, do not
        |                     |          |          |                   |               | correctly... https://avd.aquasec.com/nvd/cve-2011-3374
bash    | CVE-2019-18276      | LOW      | affected | 5.0-4             |               | bash: when effective UID is not equal to its real UID the...
        |                     |          |          |                   |               | https://avd.aquasec.com/nvd/cve-2019-18276
bash    | TEMP-0841856-B18BAF | LOW      | affected | 5.0-4             |               | [Privilege escalation possible to other user than root]
        |                     |          |          |                   |               | https://security-tracker.debian.org/tracker/TEMP-0841856-B18BAF
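In a CI pipeline you typically gate on the scan result rather than read the table by hand. The sketch below fakes the scan so it is self-contained; in real use, scan_image would be `trivy image --severity HIGH,CRITICAL --exit-code 1 <image>`, which returns non-zero when matching vulnerabilities are found:

```shell
# scan_image stands in for a real trivy invocation (see lead-in); here we
# pretend the scan found HIGH/CRITICAL vulnerabilities and returned 1
scan_image() { return 1; }

if scan_image nginx:1.19.10; then
  echo "image acceptable"
else
  echo "image rejected: patch or pin a fixed tag"
fi
```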
As the Admission Controller intercepts requests to the API server, we can leverage it to approve, deny, or modify those requests before the changes are actually persisted.
We can use the ImagePolicyWebhook admission controller, which sends requests containing information about the image being used to an external webhook; the webhook can then approve or deny the creation of the pod.
To make this work, we need the admission controller and an application that will validate/deny the images. Let's jump into it:
jbarrio@k8s-cp:~$ sudo mkdir /etc/kubernetes/admission-control
[sudo] password for jbarrio:
jbarrio@k8s-cp:~$ sudo wget -O /etc/kubernetes/admission-control/imagepolicywebhook-ca.crt https://raw.githubusercontent.com/linuxacademy/content-cks-trivy-k8s-webhook/main/certs/ca.crt
--2024-08-24 17:02:28-- https://raw.githubusercontent.com/linuxacademy/content-cks-trivy-k8s-webhook/main/certs/ca.crt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1704 (1.7K) [text/plain]
Saving to: '/etc/kubernetes/admission-control/imagepolicywebhook-ca.crt'
/etc/kubernetes/admission-control 100%[===========================================================>] 1.66K --.-KB/s in 0s
2024-08-24 17:02:43 (6.12 MB/s) - '/etc/kubernetes/admission-control/imagepolicywebhook-ca.crt' saved [1704/1704]
jbarrio@k8s-cp:~$ sudo wget -O /etc/kubernetes/admission-control/api-server-client.crt https://raw.githubusercontent.com/linuxacademy/content-cks-trivy-k8s-webhook/main/certs/api-server-client.crt
--2024-08-24 17:02:57-- https://raw.githubusercontent.com/linuxacademy/content-cks-trivy-k8s-webhook/main/certs/api-server-client.crt
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1342 (1.3K) [text/plain]
Saving to: '/etc/kubernetes/admission-control/api-server-client.crt'
/etc/kubernetes/admission-control 100%[===========================================================>] 1.31K --.-KB/s in 0s
2024-08-24 17:03:05 (36.7 MB/s) - '/etc/kubernetes/admission-control/api-server-client.crt' saved [1342/1342]
jbarrio@k8s-cp:~$ sudo wget -O /etc/kubernetes/admission-control/api-server-client.key https://raw.githubusercontent.com/linuxacademy/content-cks-trivy-k8s-webhook/main/certs/api-server-client.key
--2024-08-24 17:03:17-- https://raw.githubusercontent.com/linuxacademy/content-cks-trivy-k8s-webhook/main/certs/api-server-client.key
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1679 (1.6K) [text/plain]
Saving to: '/etc/kubernetes/admission-control/api-server-client.key'
/etc/kubernetes/admission-control 100%[===========================================================>] 1.64K --.-KB/s in 0s
2024-08-24 17:03:25 (11.6 MB/s) - '/etc/kubernetes/admission-control/api-server-client.key' saved [1679/1679]
jbarrio@k8s-cp:~$ cat /etc/kubernetes/admission-control/admission-control.conf
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/admission-control/imagepolicywebhook_backend.kubeconfig
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: true
jbarrio@k8s-cp:~$ cat /etc/kubernetes/admission-control/imagepolicywebhook_backend.kubeconfig
apiVersion: v1
kind: Config
clusters:
- name: trivy-k8s-webhook
  cluster:
    certificate-authority: /etc/kubernetes/admission-control/imagepolicywebhook-ca.crt
    server: https://acg.trivy.k8s.webhook:8090/scan
contexts:
- name: trivy-k8s-webhook
  context:
    cluster: trivy-k8s-webhook
    user: api-server
current-context: trivy-k8s-webhook
preferences: {}
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/admission-control/api-server-client.crt
    client-key: /etc/kubernetes/admission-control/api-server-client.key
jbarrio@k8s-cp:~$
Edit the kube-apiserver static pod manifest (/etc/kubernetes/manifests/kube-apiserver.yaml) to enable the plugin, point it at the configuration file, and mount the directory:
- --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
- --admission-control-config-file=/etc/kubernetes/admission-control/admission-control.conf
volumes:
[...]
- hostPath:
    path: /etc/kubernetes/admission-control
    type: DirectoryOrCreate
  name: admission-control
volumeMounts:
[...]
- mountPath: /etc/kubernetes/admission-control
  name: admission-control
  readOnly: true
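The payload exchanged with the backend is an ImageReview object. A sketch of the request the API server POSTs and a denying response (image name and reason are illustrative):

```yaml
# Request sent by the API server to the webhook endpoint
apiVersion: imagepolicy.k8s.io/v1alpha1
kind: ImageReview
spec:
  containers:
  - image: nginx:1.19.10
  namespace: default
---
# Response returned by the backend; allowed: false blocks the pod
apiVersion: imagepolicy.k8s.io/v1alpha1
kind: ImageReview
status:
  allowed: false
  reason: "image failed the vulnerability scan"
```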
When we want to limit the registries from which container images may be pulled, we can use OPA Gatekeeper:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedregistries
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRegistries
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            repos:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sallowedrepos
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        satisfied := [good | repo = input.parameters.repos[_] ; good = startswith(container.image, repo)]
        not any(satisfied)
        msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
      }
      violation[{"msg": msg}] {
        container := input.review.object.spec.initContainers[_]
        satisfied := [good | repo = input.parameters.repos[_] ; good = startswith(container.image, repo)]
        not any(satisfied)
        msg := sprintf("container <%v> has an invalid image repo <%v>, allowed repos are %v", [container.name, container.image, input.parameters.repos])
      }
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRegistries
metadata:
  name: allowed-image-registries
spec:
  enforcementAction: deny
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    repos: # key must match the `repos` property declared in the ConstraintTemplate schema
    - "mygit.org"
    - "myregistry.org"
jbarrio@k8s-cp:~$ kubectl create -f constraint-tpl-1.yaml
constrainttemplate.templates.gatekeeper.sh/k8sallowedregistries created
jbarrio@k8s-cp:~$ kubectl create -f allowed-registries.yaml
k8sallowedregistries.constraints.gatekeeper.sh/allowed-image-registries created
jbarrio@k8s-cp:~$ kubectl run bb-forbidden-repo --image=gcr.io/google-containers/busybox
pod/bb-forbidden-repo created
jbarrio@k8s-cp:~$ kubectl get pod bb-forbidden-repo
NAME READY STATUS RESTARTS AGE
bb-forbidden-repo 0/1 CrashLoopBackOff 1 (5s ago) 10s
jbarrio@k8s-cp:~$ kubectl describe pod bb-forbidden-repo
Name: bb-forbidden-repo
Namespace: default
Priority: 0
Service Account: default
Node: k8s-w1/172.16.199.131
Start Time: Fri, 23 Aug 2024 15:10:55 +0000
Labels: run=bb-forbidden-repo
Annotations: cni.projectcalico.org/containerID: 2a7f37ca460460a324c9349c4422b0257960eafc109ae8e40aa9f66bb419bff6
cni.projectcalico.org/podIP: 192.168.228.80/32
cni.projectcalico.org/podIPs: 192.168.228.80/32
Status: Running
IP: 192.168.228.80
IPs:
IP: 192.168.228.80
Containers:
bb-forbidden-repo:
Container ID: containerd://293916ccd03a13ffac8d83066babfd404ed96e8d540672ce6febb655d9607de3
Image: gcr.io/google-containers/busybox
Image ID: sha256:36a4dca0fe6fb2a5133dc11a6c8907a97aea122613fa3e98be033959a0821a1f
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 23 Aug 2024 15:11:00 +0000
Finished: Fri, 23 Aug 2024 15:11:00 +0000
Ready: False
Restart Count: 1
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fqm6q (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-fqm6q:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 20s default-scheduler Successfully assigned default/bb-forbidden-repo to k8s-w1
Normal Pulled 17s kubelet Successfully pulled image "gcr.io/google-containers/busybox" in 2.116s (2.116s including waiting)
Normal Started 15s (x2 over 17s) kubelet Started container bb-forbidden-repo
Normal Pulled 15s kubelet Successfully pulled image "gcr.io/google-containers/busybox" in 1.159s (1.159s including waiting)
Warning BackOff 13s (x2 over 14s) kubelet Back-off restarting failed container bb-forbidden-repo in pod bb-forbidden-repo_default(01753ded-8f25-4d2b-a3ae-ed13b75d1a7e)
Normal Pulling 2s (x3 over 20s) kubelet Pulling image "gcr.io/google-containers/busybox"
Normal Created 0s (x3 over 17s) kubelet Created container bb-forbidden-repo
Normal Pulled 0s kubelet Successfully pulled image "gcr.io/google-containers/busybox" in 2.345s (2.345s including waiting)
jbarrio@k8s-cp:~$
Topics covered:
To monitor for specific malicious activity we can use Falco, a tool that reports read/write access to defined locations, warns about privilege escalation, and flags the execution of suspicious binaries, such as a shell being opened inside a container.
[BLOCK TO INSTALL FALCO]
Immutable or stateless containers are containers that don't change during their lifetime; instead of being modified, they are replaced with new containers. This kind of container does not depend on mutable host resources that require privileged access.
We should make our containers immutable, or at least make the runtime code immutable and prevent software from being installed on them.
We cannot consider a container to be immutable if any of these properties are enabled:
A very interesting option is to set readOnlyRootFilesystem: true when defining a Pod: this enforces a read-only container root filesystem and prevents software from being installed at runtime.
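A sketch of an immutable-style pod (pod name, volume names, and mount paths are illustrative; nginx still needs a few writable paths, provided here as emptyDir volumes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-immutable
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      readOnlyRootFilesystem: true # root filesystem becomes read-only
    volumeMounts:
    - name: cache
      mountPath: /var/cache/nginx # nginx writes its cache here
    - name: run
      mountPath: /var/run # PID file location
  volumes:
  - name: cache
    emptyDir: {}
  - name: run
    emptyDir: {}
```

Anything written to the emptyDir volumes is discarded when the pod is replaced, keeping the container itself unchanged.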
Audit logging, in short, means that when an event happens in our Kubernetes cluster, this is recorded as a log, in this case, an Audit Log Event.
When configuring Audit Logging, we can do it by passing arguments to the kube-apiserver configuration:
--audit-policy-file
--audit-log-path
--audit-log-maxage
--audit-log-maxbackup
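Wired into the kube-apiserver static pod manifest, the flags above could look like this (paths and retention values are illustrative):

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
- --audit-policy-file=/etc/kubernetes/audit/policy.yaml
- --audit-log-path=/var/log/kubernetes/audit/audit.log
- --audit-log-maxage=30      # days to keep rotated log files
- --audit-log-maxbackup=10   # number of rotated files to keep
```

Remember that the policy file and the log directory must also be mounted into the API server pod via hostPath volumes.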
We can create an Audit Policy containing a set of rules that determine which events are logged and how verbose they are:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# log request and response for all changes made to namespaces
- level: RequestResponse
  resources:
  - group: ""
    resources: ["namespaces"]
# log only requests on pods and services in the web namespace
- level: Request
  resources:
  - group: ""
    resources: ["pods", "services"]
  namespaces: ["web"]
# log all metadata changes done to secrets
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]
# catch-all rule to log all metadata
- level: Metadata