CKA

Cluster Architecture

Master: manage, plan, schedule, monitor nodes

ETCD CLUSTER

  • Nodes

  • PODs

  • Configs

  • Secrets

  • Accounts

  • Roles

  • Bindings

  • Others

# Run the etcd service (it listens on port 2379 by default)
./etcd
./etcdctl put key1 value1
./etcdctl get key1

ETCDCTL version 2 commands:

etcdctl backup
etcdctl cluster-health
etcdctl mk
etcdctl mkdir
etcdctl set

ETCDCTL version 3 commands:

etcdctl snapshot save 
etcdctl endpoint health
etcdctl get
etcdctl put

To select the right API version, set the ETCDCTL_API environment variable:

export ETCDCTL_API=3

Specify the path to the certificate files so that etcdctl can authenticate to the ETCD API server:

--cacert /etc/kubernetes/pki/etcd/ca.crt     
--cert /etc/kubernetes/pki/etcd/server.crt     
--key /etc/kubernetes/pki/etcd/server.key

Specify the ETCDCTL API version and the certificate paths together in one command:

kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get / --prefix --keys-only --limit=10 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt  --key /etc/kubernetes/pki/etcd/server.key" 
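The same certificate flags apply when taking a backup with the v3 snapshot command. A sketch, assuming the certificate paths above and a local endpoint (the output path /opt/snapshot-pre-boot.db is just an example):

ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot-pre-boot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key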

kube-scheduler

Controller-Manager

watch status

Remediate situation

Node-Controller

Node Monitor Period = 5s

Node monitor grace period = 40s

Pod Eviction timeout = 5m
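These timings are configured as kube-controller-manager flags. A sketch of how the defaults would appear on its command line (note: --pod-eviction-timeout is deprecated in newer releases):

kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s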

Worker Nodes: host applications as containers

Replication-Controller

kube-apiserver

The primary management component in Kubernetes. It handles a request in these steps:

  1. Authenticate user

  2. Validate request

  3. Retrieve data

  4. Update ETCD

  5. Scheduler

  6. Kubelet
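Every request, from kubectl or otherwise, flows through these steps, and the kube-apiserver is the only component that talks to ETCD directly. A quick way to see the raw API is kubectl's raw mode (the path below is a standard core-API path):

// query the kube-apiserver directly
kubectl get --raw /api/v1/namespaces/default/pods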

kubelet

Kube-proxy


View kube-proxy - kubeadm

kubectl get pods -n kube-system
kubectl get daemonset -n kube-system


YAML in Kubernetes

pod-definition.yml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx

lab

// create a new pod with the nginx image
k run nginx --image=nginx
// update the image 
k edit pod redis

// ** important command
kubectl run redis --image=foobar123 --dry-run=client -o yaml > pod.yaml

Deployments

// deployment-definition.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
     app: myapp
     type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end

Certification Tip

Using the kubectl run command can help in generating a YAML template. Sometimes you can even get away with just the kubectl run command without creating a YAML file at all. For example, if you were asked to create a pod or deployment with a specific name and image, you can simply run the kubectl run command.

Use the below set of commands and try the previous practice tests again, but this time use the commands instead of YAML files. Try to use these as much as you can going forward in all exercises.

Reference (Bookmark this page for exam. It will be very handy):

https://kubernetes.io/docs/reference/kubectl/conventions/

Create an NGINX Pod

kubectl run nginx --image=nginx

Generate POD Manifest YAML file (-o yaml). Don't create it (--dry-run=client)

kubectl run nginx --image=nginx --dry-run=client -o yaml

Create a deployment

kubectl create deployment --image=nginx nginx

Generate Deployment YAML file (-o yaml). Don't create it (--dry-run=client)

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml

Generate Deployment YAML file (-o yaml) with 4 replicas. Don't create it (--dry-run=client):

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml

Save it to a file, make necessary changes to the file (for example, adding more replicas) and then create the deployment.

kubectl create -f nginx-deployment.yaml

OR

In k8s version 1.19+, we can specify the --replicas option to create a deployment with 4 replicas.

kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml

Services

k get deployments.apps
k describe deployments.apps simple-webapp-deployment | grep -i image
k expose deployment simple-webapp-deployment --name=webapp-service --target-port=8080 --type=NodePort --port=8080 --dry-run=client -o yaml > svc.yaml

Imperative vs Declarative

Imperative: Taxi

Declarative: Uber

Infrastructure as Code

Imperative: 1. Provision a VM named 'web-server'

2. Install NGINX software on it

Declarative: VM Name: web-server

Package: nginx

Port: 8080

Path: /var/www/nginx

Code: GIT Repo - X

Kubernetes

Imperative:

Create Objects

k run --image=nginx nginx

k create deployment --image=nginx nginx

k expose deployment nginx --port 80

Update Objects

k edit deployment nginx

k scale deployment nginx --replicas=5

k set image deployment nginx nginx=nginx:1.18

k create -f nginx.yaml

k replace -f nginx.yaml

k delete -f nginx.yaml

Imperative Object Configuration Files

Create Objects

k create -f nginx.yaml
// local file
nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx

Kubernetes Memory (the live object configuration)

pod-definition

apiVersion: v1
kind: Pod

metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx:1.18
status:
  conditions:
  - lastProbeTime: null
    status: "True"
    type: Initialized  

Update Objects

kubectl edit deployment nginx
k replace -f nginx.yaml
k replace --force -f nginx.yaml

Declarative:

Create Objects

k apply -f nginx.yaml

k apply -f /path/to/config-files

Update Objects

k apply -f nginx.yaml

Exam Tips

Create Objects

k apply -f nginx.yaml

k run --image=nginx nginx

k create deployment --image=nginx nginx

k expose deployment nginx --port 80

Update objects

k apply -f nginx.yaml

k edit deployment nginx

k scale deployment nginx --replicas=5

k set image deployment nginx nginx=nginx:1.18

Certification Tips - Imperative Commands with Kubectl

While you would be working mostly the declarative way - using definition files, imperative commands can help in getting one time tasks done quickly, as well as generate a definition template easily. This would help save considerable amount of time during your exams.

Before we begin, familiarize with the two options that can come in handy while working with the below commands:

--dry-run: By default, as soon as the command is run, the resource will be created. If you simply want to test your command, use the --dry-run=client option. This will not create the resource; instead, it tells you whether the resource can be created and whether your command is right.

-o yaml: This will output the resource definition in YAML format on screen.

Use the above two in combination to generate a resource definition file quickly, which you can then modify to create resources as required, instead of writing the files from scratch.

POD

Create an NGINX Pod

kubectl run nginx --image=nginx

Generate POD Manifest YAML file (-o yaml). Don't create it (--dry-run=client)

kubectl run nginx --image=nginx --dry-run=client -o yaml

Deployment

Create a deployment

kubectl create deployment --image=nginx nginx

Generate Deployment YAML file (-o yaml). Don't create it (--dry-run=client)

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml

Generate Deployment with 4 Replicas

kubectl create deployment nginx --image=nginx --replicas=4

You can also scale a deployment using the kubectl scale command.

kubectl scale deployment nginx --replicas=4

Another way to do this is to save the YAML definition to a file and modify

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml

You can then update the YAML file with the replicas or any other field before creating the deployment.

Service

Create a Service named redis-service of type ClusterIP to expose pod redis on port 6379

kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml

(This will automatically use the pod's labels as selectors)

Or

kubectl create service clusterip redis --tcp=6379:6379 --dry-run=client -o yaml

(This will not use the pod's labels as selectors; instead it will assume the selector is app=redis. You cannot pass in selectors as an option, so it does not work well if your pod has a different label set. Generate the file and modify the selectors before creating the service.)

Create a Service named nginx of type NodePort to expose pod nginx's port 80 on port 30080 on the nodes:

kubectl expose pod nginx --type=NodePort --port=80 --name=nginx-service --dry-run=client -o yaml

(This will automatically use the pod's labels as selectors, but you cannot specify the node port. You have to generate a definition file and then add the node port manually before creating the service.)

Or

kubectl create service nodeport nginx --tcp=80:80 --node-port=30080 --dry-run=client -o yaml

(This will not use the pods labels as selectors)

Both of the above commands have their own challenges: one cannot accept a selector, the other cannot accept a node port. I would recommend going with the kubectl expose command. If you need to specify a node port, generate a definition file using the same command and manually add the nodePort before creating the service.
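For reference, nodePort is the field you add manually under ports. A sketch of the finished definition, assuming the pod carries the run=nginx label that kubectl run adds by default:

// nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080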

Reference:

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands

Scheduling

How scheduling works

No Scheduler!

// pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
  nodeName: node02

## if the pod already exists and you want to assign it to a node,
## create a Binding object instead
// pod-bind-definition.yaml
apiVersion: v1
kind: Binding
metadata:
  name: nginx
target:
  apiVersion: v1
  kind: Node
  name: node02
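The Binding object is then POSTed as JSON to the pod's binding API. A sketch with curl ($SERVER is a placeholder for the API server address):

curl --header "Content-Type: application/json" --request POST \
  --data '{"apiVersion": "v1", "kind": "Binding", "metadata": {"name": "nginx"}, "target": {"apiVersion": "v1", "kind": "Node", "name": "node02"}}' \
  http://$SERVER/api/v1/namespaces/default/pods/nginx/binding/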

Manual scheduling (lab)

k -n kube-system get pods
// the kube-scheduler pod is missing, so new pods stay Pending
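With no scheduler running, one way to finish the lab is to set nodeName yourself and force-recreate the pod; a sketch (the nginx/node01 names are typical for this lab and may differ):

k get pod nginx -o yaml > nginx.yaml
// add nodeName: node01 under spec in nginx.yaml, then:
k replace --force -f nginx.yaml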

Labels and Selectors

Labels

// pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp
  labels:
    app: App1
    function: Front-end
spec:
  containers:
  - name: simple-webapp
    image: simple-webapp
    ports:
      - containerPort: 8080

Select

k get pods --selector app=App1
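Selectors can also be combined with commas, and --no-headers helps when counting objects in labs (the env=prod label here is just an example):

k get pods --selector app=App1,function=Front-end
k get pods --selector env=prod --no-headers | wc -l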

ReplicaSet

A ReplicaSet selects the pods it manages by matching labels; the same selector method is used in a Service, as the two definitions below show.
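A minimal ReplicaSet sketch, mirroring the simple-webapp pod labels above (the image name is hypothetical):

// replicaset-definition.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: simple-webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: App1
  template:
    metadata:
      labels:
        app: App1
        function: Front-end
    spec:
      containers:
      - name: simple-webapp
        image: simple-webapp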

// service-definition.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: App1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376

Annotations

// replicaset-definition.yaml
metadata:
  name: simple-webapp
  labels:
    app: App1
    function: Front-end
  annotations:
    buildversion: "1.34"

Taints and Tolerations

Taints-Node

k taint nodes node-name key=value:taint-effect

taint-effect: NoSchedule | PreferNoSchedule | NoExecute

k taint nodes node1 app=blue:NoSchedule

Tolerations-PODs

k taint nodes node1 app=blue:NoSchedule

// pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"

Taint-NoExecute

Taints and tolerations do not tell a pod to go to a particular node. Instead, they tell a node to only accept pods with a matching toleration.

If your requirement is to restrict a pod to certain nodes, that is achieved through another concept called node affinity.

k describe node kubemaster|grep -i taint

Remove the taint: k taint nodes controlplane node-role.kubernetes.io/master:NoSchedule-

node/controlplane untainted

k run bee --image=nginx --restart=Never --dry-run=client -o yaml > bee.yaml

k explain pod --recursive|less

k explain pod --recursive | grep -A5 tolerations

k get pods -o wide

Node Selectors

pod-definition.yml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: data-processor
    image: data-processor
  nodeSelector:
    size: Large

Label Nodes

k label nodes <node-name> <label-key>=<label-value>

k label nodes node-1 size=Large

Node Selector - Limitations

A single-label nodeSelector cannot express rules such as:

  • Large OR Medium

  • NOT Small

Node Affinity

ensure that pods are hosted on particular Nodes

pod-definition.yml

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: data-processor
    image: data-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In #NotIn #Exists
            values:
            - Large  #Small

Node Affinity Types

available:

requiredDuringSchedulingIgnoredDuringExecution

preferredDuringSchedulingIgnoredDuringExecution

         DuringScheduling   DuringExecution
Type 1   Required           Ignored
Type 2   Preferred          Ignored
Type 3   Required           Required
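For the preferred type, the schema takes a weight (1-100) plus a preference instead of nodeSelectorTerms. A sketch, reusing the size label from above:

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: size
          operator: In
          values:
          - Large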

planned:

requiredDuringSchedulingRequiredDuringExecution

Lab command:

k get nodes node01 --show-labels

Create a new deployment named 'blue' with the NGINX image and 6 replicas:

k create deployment blue --image=nginx

k scale deployment blue --replicas=6

Set Node Affinity to the deployment to place the PODs on node01 only:

k get deployments.apps blue -o yaml > blue.yaml

vi blue.yaml

add affinity part from https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/
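The block to add under the deployment's pod template spec would look roughly like this (a sketch based on that docs page, selecting node01 by its hostname label):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node01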

k delete deployments.apps blue

k apply -f blue.yaml

Create a new deployment named 'red' with the NGINX image and 3 replicas, and ensure it gets placed on the master node only.

Use the node-role.kubernetes.io/master label set on the master node.

k create deployment red --image=nginx --dry-run=client -o yaml > red.yaml

change the replicas to 3 and add the affinity block

k apply -f red.yaml

k get pods -o wide|grep red

Node affinity vs taints and tolerations

Taints and tolerations keep unwanted pods off a node but do not guarantee that a pod lands on a particular node; node affinity places pods on particular nodes but does not stop other pods from landing there too. Combining both lets you dedicate nodes to specific pods.

Resource Requirements and Limits

pod-definition.yaml

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
      - containerPort: 8080
    resources:
      requests:
        memory: "1Gi"
        cpu: 1

CPU 1

  • 1 AWS vCPU

  • 1 GCP Core

  • 1 Azure Core

  • 1 Hyperthread

MEM 256 Mi

1 G (Gigabyte) = 1,000,000,000 bytes

1 M (Megabyte) = 1,000,000 bytes

1 K (Kilobyte) = 1,000 bytes

1 Gi (Gibibyte) = 1,073,741,824 bytes

1 Mi (Mebibyte) = 1,048,576 bytes

1 Ki (Kibibyte) = 1,024 bytes

Disk

Resource limits

When defaults are configured (see the note below), each container is limited to 1 vCPU and 512Mi of memory. Kubernetes throttles the CPU so that a container does not go beyond its CPU limit; a container can, however, use more memory than its memory limit.

If a pod constantly tries to consume more memory than its limit, the pod will be terminated.
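To set limits explicitly, add a limits section next to requests. A sketch extending the container above (the 2Gi/2 values are just examples):

    resources:
      requests:
        memory: "1Gi"
        cpu: 1
      limits:
        memory: "2Gi"
        cpu: 2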

Note on default resource requirements and limits

In the previous lecture, I said - "When a pod is created the containers are assigned a default CPU request of .5 and memory of 256Mi". For the POD to pick up those defaults you must have first set those as default values for request and limit by creating a LimitRange in that namespace.

apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container

https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container

https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/

References:

https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource
