How do I use a Kubernetes liveness command for application health checks?
-
A liveness command checks the health of an application. It is all very well having the container running in a pod, but application-level health checks need to be in place too.
Let's break down the code below:
- create a container running the busybox image
- have it execute a command to create a file
- wait 30 seconds
- remove the file (to simulate a failure)
- sleep for 600 seconds.
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 3
      periodSeconds: 5
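To see when the kubelet will restart the container, you can work out the probe timeline from the parameters above. A minimal sketch, assuming the Kubernetes default failureThreshold of 3 consecutive failures before a restart (the helper function is illustrative, not part of any API):

```python
# Sketch of the liveness-probe timeline for the pod above.
# Assumes the Kubernetes default failureThreshold of 3.

def restart_window(file_removed_at, initial_delay, period, failure_threshold=3):
    """Return (first_failing_probe, restart_time): the earliest probe that
    can fail after the file is removed, and the probe at which the failure
    threshold is reached, both in seconds after container start."""
    # Probes run at initial_delay, initial_delay + period, ...
    t = initial_delay
    while t < file_removed_at:
        t += period
    first_failure = t
    restart_at = first_failure + (failure_threshold - 1) * period
    return first_failure, restart_at

# /tmp/healthy is removed at t=30s; probing starts at 3s, repeating every 5s.
first, restart = restart_window(30, 3, 5)
print(first, restart)  # → 33 43
```

This lines up with the events output further down, where the probe is reported as failing "x3 over 43s".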
Create the pod.
$ kubectl create -f livenesscmd.yaml
Looking at the events, you can see the pod being created, going into an unhealthy state, and then starting a new container.
$ kubectl describe pod liveness
Events:
  Type     Reason                 Age                From               Message
  ----     ------                 ----               ----               -------
  Normal   Scheduled              1m                 default-scheduler  Successfully assigned liveness-exec to minikube
  Normal   SuccessfulMountVolume  1m                 kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-fpffp"
  Warning  Unhealthy              33s (x3 over 43s)  kubelet, minikube  Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Pulling                2s (x2 over 1m)    kubelet, minikube  pulling image "k8s.gcr.io/busybox"
  Normal   Pulled                 2s (x2 over 1m)    kubelet, minikube  Successfully pulled image "k8s.gcr.io/busybox"
  Normal   Created                2s (x2 over 1m)    kubelet, minikube  Created container
  Normal   Killing                2s                 kubelet, minikube  Killing container with id docker://liveness: Container failed liveness probe.. Container will be killed and recreated.
  Normal   Started                1s (x2 over 1m)    kubelet, minikube  Started container
As well as the liveness command, there are two other types of liveness checks: the HTTP request liveness probe and the TCP liveness probe.
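For reference, here are minimal sketches of those two probe types, using the standard httpGet and tcpSocket probe fields; the path and port numbers shown are illustrative assumptions, not values from the example above:

```yaml
# HTTP liveness probe: the kubelet sends an HTTP GET to the given
# path and port; any 2xx or 3xx response code counts as healthy.
livenessProbe:
  httpGet:
    path: /healthz        # illustrative path
    port: 8080            # illustrative port
  initialDelaySeconds: 3
  periodSeconds: 5
---
# TCP liveness probe: the kubelet simply tries to open a TCP
# connection to the port; a successful connection passes the probe.
livenessProbe:
  tcpSocket:
    port: 8080            # illustrative port
  initialDelaySeconds: 3
  periodSeconds: 5
```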
© Lightnetics 2024