
Kubernetes - CKAD Cheat Sheet (DRAFT)

Commands that will help you during the CKAD exam.

This is a draft cheat sheet. It is a work in progress and is not finished yet.


alias k=kubectl
k api-resources
k explain node
explain shows introspection documentation for a resource
k explain node.spec


k get po
k get po -o wide
k get po <pod-name> -o yaml
output in YAML format
k get po <pod-name> -o yaml > pod.yaml
output in YAML format to a file
k get pods -n <namespace-name>
k get pods --all-namespaces
kubectl get pods -A
shorter version of --all-namespaces
kubectl get events -A | grep error
all events in all namespaces with errors
k get pod --show-labels
k get pod -l <key>=<value>
k describe po <pod-name>
k delete po <pod-name>
k delete po --all
k apply -f <pod.yaml> -n <namespace-name>
k edit po <pod-name>
k run nginx --image=nginx --dry-run=client -o yaml > pod.yaml
k exec [POD] -- [COMMAND]
k exec nginx -- ls /
k exec -it nginx -- /bin/sh
Get a shell to the container running in your pod
If you are not given a pod definition file but only a running pod, you can extract its definition to a file.

In a multi-container pod, containers are created and destroyed together.

initContainers is a property under spec and has the same sub-properties as containers (name, image, etc.).

kubectl run creates a pod, not a deployment. There is no imperative command like kubectl create po for pod creation.

Pods have a property called restartPolicy whose value can be Always, Never or OnFailure.
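A minimal pod manifest tying the notes above together; the names and the sleep command are placeholders, not part of any exam task:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  restartPolicy: Always          # Always | Never | OnFailure
  initContainers:                # same sub-properties as containers
  - name: init-wait
    image: busybox
    command: ["sh", "-c", "sleep 5"]
  containers:
  - name: app
    image: nginx
```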

ReplicaSet

k apply -f <replicaset-definition.yaml>
k get rs
k get rs -o wide
k get rs -o yaml
k delete rs <replicaset-name>
k scale rs <replicaset-name> --replicas=6
k edit rs <replicaset-name>
k describe rs <replicaset-name>
k delete po <rs-pod-name>
Delete a pod owned by the ReplicaSet (the ReplicaSet will create a replacement)
k get rs <rs-name> -o yaml > rs.yaml
# Editing RS
- Either delete and re-create the ReplicaSet, or
- Update the existing ReplicaSet and then delete all its pods, so new ones with the correct image are created.

The labels in the spec.selector clause and in spec.template.metadata must match in a ReplicaSet.
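A skeleton ReplicaSet showing the label match; "frontend" is an example label value:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend        # must match the template labels below
  template:
    metadata:
      labels:
        app: frontend      # must match spec.selector.matchLabels
    spec:
      containers:
      - name: nginx
        image: nginx
```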


k get cm
k describe cm <cm-name>
to check key-value pairs in the ConfigMap
k create cm <cm-name> --from-file=<path-to-file>
colon or equals sign as delimiter between keys and values
k create cm <cm-name> --from-file=<directory>
k create cm <cm-name> --from-literal=<key1>=<value1> --from-literal=<key2>=<value2>
# Properties to remember:
configMapKeyRef / env, configMapRef / envFrom, volume

# Pods can consume ConfigMaps as environment variables or as configuration files in a volume mounted on one or more of their containers for the application to read.
When a ConfigMap is created from a file (k create cm <cm-name> --from-file=) and that ConfigMap is mounted as a volume,
the entire file is available at the mount point inside the pod.

# Injected into the Pod.
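A pod spec fragment sketching all three consumption styles; the ConfigMap name "app-config" and key "mode" are placeholders:

```yaml
spec:
  containers:
  - name: app
    image: nginx
    envFrom:                    # all keys of the ConfigMap become env vars
    - configMapRef:
        name: app-config
    env:                        # a single key via configMapKeyRef
    - name: APP_MODE
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: mode
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config    # each key appears as a file here
  volumes:
  - name: config-vol
    configMap:
      name: app-config
```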


k get secrets
k get secret --all-namespaces
k describe secret <secret-name>
This shows the attributes in the secret but hides the values.
k get secret <secret-name> -o yaml
To view the values (base64-encoded).
k create secret generic <secret-name> --from-literal=<key1>=<value1> --from-literal=<key2>=<value2>
k create secret generic <secret-name> --from-file=<path-to-file>
echo -n 'string' | base64
echo -n 'encoded string' | base64 --decode
Properties to remember: secretKeyRef / env, secretRef / envFrom, volume

A secret can be injected into a pod as files in a volume mounted on one or more of its containers, or as container environment variables.

When creating a secret with the declarative approach (YAML), you must specify the key values in base64-encoded form.
When creating a secret with the imperative approach, values are encoded automatically (and decoded on use).
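A quick round trip of the encode/decode commands above; the value 'password' is just an example:

```shell
# encode a value for use in a declarative Secret manifest
echo -n 'password' | base64            # prints cGFzc3dvcmQ=

# decode a value read from `k get secret <secret-name> -o yaml`
echo -n 'cGFzc3dvcmQ=' | base64 --decode   # prints password
```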

Taints (on nodes) and Tolerations (on pods)

k taint nodes [node_name] <key>=<value>:[NoSchedule|NoExecute|PreferNoSchedule]
The taint effect defines what happens to pods that do not tolerate the taint.
k taint no node01 spray=mortein:NoSchedule
k describe nodes node01 | grep -i "taint"
To check taints on a node
The tolerations property under spec has sub-properties key, operator, value and effect, and their values go inside double quotes.

Remove the taint on master, which currently has the taint effect NoSchedule
$ k taint no master node-role.kubernetes.io/master:NoSchedule-

Remove from node 'foo' all taints with key 'dedicated'
$ k taint no foo dedicated-
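A pod spec fragment with a toleration matching the spray=mortein:NoSchedule taint from the example above:

```yaml
spec:
  tolerations:
  - key: "spray"
    operator: "Equal"
    value: "mortein"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx
```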

Horizontal Pod Autoscaler

k get hpa
k delete hpa <hpa-name>
k autoscale deploy nginx --min=5 --max=10 --cpu-percent=80


k top no
k top po
k top pod --namespace=default | head -2 | tail -1 | cut -d " " -f1
prints the name of the first pod in the default namespace
k top po --sort-by cpu --no-headers


k get events
k get events -n kube-system
k get events -w


k get ds
k get ds --all-namespaces
k describe ds <ds-name> -n <ns-name>
k get ds <ds-name> -o yaml

Environment Variables

k run nginx --image=nginx --env=app=web
# Create an nginx pod and set an environment variable
The env and envFrom properties are arrays.
Each env entry takes two properties, name and value; value takes only a string and always appears in double quotes.
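A pod spec fragment mirroring the --env=app=web flag above; the key/value pair is only illustrative:

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    env:                 # env is an array
    - name: app
      value: "web"       # value is always a quoted string
```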


k apply -f <svc.yaml>
k get svc
k get svc --show-labels
k get svc -o wide
k get svc -o yaml
k describe svc <service-name>
To see the port, target port, endpoints, etc.
k delete svc <service-name>


k get no
k get no -o wide
k describe no <node-name>

Network Policy

k get netpol
k describe netpol <netpol-name>
Note: When creating a network policy, make sure not only that it is applied to the correct object, but also that it allows traffic from the correct objects (ingress) and to the correct objects (egress).

Labels and selectors are used to select the pods a policy applies to.

An empty podSelector selects all pods in the namespace.
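A sketch of a NetworkPolicy; the role=db / role=api labels and port 3306 are placeholders for whatever the task specifies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:           # an empty podSelector ({}) would select all pods
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api
    ports:
    - protocol: TCP
      port: 3306
```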


k annotate po nginx desc="Hello World"
k annotate po nginx author=Avnish
k annotate po nginx desc-
Remove this annotation from the pod
k annotate no <node-name> <key>=<value>

Labels & Selectors

k get [pod|deploy|all] --show-labels
Show labels
k label [node|pod|deploy|etc] <name> <key>=<value>
k label pod nginx env=lab
To label a pod
k label deploy my-webapp tier=frontend
To label a deployment
k label node node01 size=large
To label a node
k label po nginx <key>-
Remove the label
k label po nginx env=lab1 --overwrite
To overwrite a label
k get po --selector=app=App1
k get po --selector=app!=App1
k get all --selector=env=prod
k get po --selector=env=prod,bu=finance,tier=frontend
Equivalent of && in programming languages


k get pv
k describe pv
k delete pv <pv-name>
k get pvc
k describe pvc
k delete pvc <pvc-name>


k get ingress
k describe ingress <ingress-name>
k edit ingress <ingress-name>
k apply -f <ingress.yaml>


k logs -f <pod-name> <container-name>
Follow the logs
k logs <pod-name> --previous
Dump pod logs for a previous instantiation of a container
k logs --tail=20 <pod-name>


k create deploy nginx --image=nginx --replicas=2
k get deploy
k get deploy -n <namespace-name>
k get deploy <deployment-name>
k get deploy <deployment-name> -o yaml | more
k get deploy -o wide
k describe deploy <deployment-name>
k apply -f deploy.yaml
k apply -f deploy.yaml --record
Record the change-cause in revision history
k rollout status deploy <deployment-name>
k rollout history deploy <deployment-name>
k rollout history deploy nginx --revision=2 # detailed history for a specific revision
k rollout undo deploy <deployment-name>
k rollout undo deploy <deployment-name> --to-revision=1
k rollout pause deploy <deployment-name>
k rollout resume deploy <deployment-name>
k delete deploy <deployment-name>
k create deploy nginx --image=nginx --dry-run=client -o yaml > deploy.yaml
k scale deploy <deployment-name> --replicas=3 # scale a deployment up/down; not recorded in revision history
k create deploy <deployment-name> --image=redis -n <namespace-name>
k edit deploy <deployment-name> -n <namespace-name>
k expose deploy <deployment-name> --port=80 --type=NodePort|ClusterIP
k autoscale deploy <deployment-name> --max 6 --min 3 --cpu-percent 50
Autoscale a deployment
Deployments can be paused and resumed. While paused, no changes are recorded to revision history.

The rolling update strategy defines up to how many pods can be down/up at a time during an update.

Updating the image or creating a new deployment triggers a rollout. A new rollout creates a new deployment revision.
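A deployment spec fragment for the rolling update settings; 25% is the default for both fields:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # how many pods may be down during the update
      maxSurge: 25%         # how many extra pods may be created
```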

Service Account

k create sa <sa-name>
k get sa
k get sa <sa-name> -o yaml
k describe sa <sa-name>
Fetch token: gives the secret name
k describe secret <secret-name>
Fetch token: gives the token stored in the secret
k run nginx --image=nginx --serviceaccount=myuser --dry-run=client -o yaml > pod.yaml
nginx pod that uses 'myuser' as its service account
When a pod uses a service account, the secret for that service account is mounted as a volume inside the pod.

Property to remember: spec -> serviceAccountName # set at pod level
Injected into the pod.
For a deployment, set the service account in the pod template.

A user makes a request to the API server through kubectl using a user account.
A process running inside a container makes a request to the API server using a service account.
A service account, just like a user account, has certain permissions.
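The declarative equivalent of the --serviceaccount flag; 'myuser' is the example service account name from above:

```yaml
spec:
  serviceAccountName: myuser
  containers:
  - name: nginx
    image: nginx
```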


k get cj
k create cj busybox --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c "date; echo Hello from Kubernetes cluster"
cron job with image busybox that runs on a schedule and writes to standard output
In a cronjob, there are 2 templates - one for the job and another for the pod.

In a cronjob, there are 3 spec sections - one for the cronjob, one for the job and one for the pod (in that order).

Properties to remember: spec -> successfulJobsHistoryLimit, spec -> failedJobsHistoryLimit
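A CronJob skeleton marking the three nested spec sections; the history-limit values are examples:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: busybox
spec:                          # 1: cronjob spec
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:                      # 2: job spec
      template:
        spec:                  # 3: pod spec
          restartPolicy: OnFailure
          containers:
          - name: busybox
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello from Kubernetes cluster"]
```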


k create job busybox --image=busybox -- /bin/sh -c "echo hello; sleep 30; echo world"
k get jobs
k logs busybox-qhcnx
pod created by the job
k delete job <job-name>
restartPolicy defaults to Never in the template generated by kubectl create job.

A job has 2 spec sections - one for the job and one for the pod (in that order).

restartPolicy has to be OnFailure or Never. If restartPolicy is OnFailure, a failed container is re-run in the same pod. If restartPolicy is Never, a failed container is re-run in a new pod.

Job properties to remember: completions, backoffLimit, parallelism, activeDeadlineSeconds, restartPolicy.

By default, pods in a job are created in sequence.
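A Job skeleton using the properties listed above; the numeric values are examples, not defaults:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: busybox
spec:
  completions: 3              # run 3 pods to completion
  parallelism: 1              # 1 means pods run one after another
  backoffLimit: 4             # retries before the job is marked failed
  activeDeadlineSeconds: 120
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "echo hello; sleep 30; echo world"]
```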


k get ns
k get ns -o yaml
k create ns <namespace-name>
k apply -f <namespace.yaml>
k get po -n kube-system
k describe ns <namespace-name>
k delete ns <namespace-name>
k config set-context $(k config current-context) --namespace=dev
To switch to a namespace permanently
k run redis --image=redis -n <namespace-name>
Create/run in a specific namespace
k exec -it <pod-name> -n <namespace-name> -- sh
k create ns <namespace-name> --dry-run=client -o yaml