Studying for CKA - Start

Starting

I bought a CKA certification voucher in 2020 and postponed my study for several different reasons. Now I’ve decided to begin, since the voucher expires in November.

So I decided to take some notes along the way; apologies if this page doesn’t read as very didactic.

During my study I mixed different sources.

First of all, I needed to check the prerequisite knowledge; some key points are shown below.

  • if you find an error or want to suggest something, feel free to comment

About CKA & The exam

The Certified Kubernetes Administrator (CKA) program was created by the Cloud Native Computing Foundation (CNCF), in collaboration with The Linux Foundation, to help develop the Kubernetes ecosystem. – CKA

The Certified Kubernetes Administrator (CKA) certification is designed to ensure that certification holders have the skills, knowledge, and competency to perform the responsibilities of Kubernetes Administrators. The CKA certification allows certified administrators to quickly establish their credibility and value in the job market, and also allowing companies to more quickly hire high-quality teams to support their growth. – Linux Foundation - FAQ

Questions are grouped as below (source: Linux Foundation - CKA):

Domains & Competencies:
  • Storage - 10%
  • Troubleshooting - 30%
  • Workloads and Scheduling - 15%
  • Cluster Architecture, Installation & Configuration - 25%
  • Services & Networking - 20%

Storage
  • Understand storage classes, persistent volumes
  • Understand volume mode, access modes and reclaim policies for volumes
  • Understand persistent volume claims primitive
  • Know how to configure applications with persistent storage

Troubleshooting
  • Evaluate cluster and node logging
  • Understand how to monitor applications
  • Manage container stdout & stderr logs
  • Troubleshoot application failure
  • Troubleshoot cluster component failure
  • Troubleshoot networking

Workloads & Scheduling
  • Understand deployments and how to perform rolling update and rollbacks
  • Use ConfigMaps and Secrets to configure applications
  • Know how to scale applications
  • Understand the primitives used to create robust, self-healing, application deployments
  • Understand how resource limits can affect Pod scheduling
  • Awareness of manifest management and common templating tools

Cluster Architecture, Installation & Configuration
  • Manage role based access control (RBAC)
  • Use Kubeadm to install a basic cluster
  • Manage a highly-available Kubernetes cluster
  • Provision underlying infrastructure to deploy a Kubernetes cluster
  • Perform a version upgrade on a Kubernetes cluster using Kubeadm
  • Implement etcd backup and restore

Services & Networking
  • Understand host networking configuration on the cluster nodes
  • Understand connectivity between Pods
  • Understand ClusterIP, NodePort, LoadBalancer service types and endpoints
  • Know how to use Ingress controllers and Ingress resources
  • Know how to configure and use CoreDNS
  • Choose an appropriate container network interface plugin

The exam:
- Languages: English, Japanese and Simplified Chinese
- Online
- Duration: 2 hours
- Valid for 3 years
- Free Retake
       * as long as the voucher has been acquired at Linux Foundation, and
         not marked as SINGLE-ATTEMPT
- PDF Certificate and Digital Badge
- Kubernetes v1.21
- Exam simulator - *check conditions

Sources:

What is allowed?

According to the documentation, during the exam, candidates may:

Knowledge

Some background knowledge is necessary.

What is Kubernetes?

According to Kubernetes Docs:

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available – Kubernetes Docs

More about:

Once deployed, Kubernetes provides a cluster. This cluster is composed of Nodes, each with different, well-defined tasks.

Pod - An essential part

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. – Kubernetes Docs - Pods

The containers in a Pod run in a shared context: a set of Linux namespaces, cgroups, and other aspects of isolation.

It’s common to confuse Containers and Pods. Put simply, a container runs a (micro)service, while a Pod contains one or more containers.

A Pod acts as a host that can run several different containers.

In general, when a Pod has multiple containers, it’s because they need to work together and share resources such as the network and mount points (directories or attached storage).

Kubernetes manages the Pods, and if more instances are needed it increases the number of Pods, scaling the application horizontally; this is called Replication. Kubernetes also provides self-healing and scaling by checking the workload’s resources (CPU, memory, etc.); a Deployment sketch illustrating this follows the Pod examples below.

Below is an example of a single-container Pod, defined here as the Pod template of a Job:

apiVersion: batch/v1
kind: Job
metadata:
  # Name of the Job
  name: hello
spec:
  template:
    # This is the pod template
    spec:
      containers:
      - name: hello
        # Container image to run
        image: busybox
        command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
      restartPolicy: OnFailure
    # The pod template ends here

Below is an example of a Pod with multiple containers sharing the same directory:

  • this sample uses an emptyDir volume: ephemeral storage backed by a directory on the node
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  # Ephemeral volume shared by both containers
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  # debian writes index.html into the shared volume, then exits
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
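
To tie this back to the Replication and resource checking mentioned above, here is a minimal Deployment sketch (the name hello-deployment, the app: hello label and the nginx image are just illustrative choices): replicas: 3 asks Kubernetes to keep three Pods running, recreating them if they fail, while the resource requests and limits feed into scheduling decisions.

apiVersion: apps/v1
kind: Deployment
metadata:
  # Illustrative name
  name: hello-deployment
spec:
  # Kubernetes keeps this many Pod replicas running (self-healing)
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx
        resources:
          # requests are used by the scheduler to pick a node;
          # limits cap what the container may use
          requests:
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 250m
            memory: 128Mi

Scaling up or down is then just a matter of changing replicas (for example with kubectl scale deployment hello-deployment --replicas=5).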

References for this topic:

  1. Pods and Containers
  2. What’s the difference between a pod, a cluster, and a container?

Kubernetes components

Kubernetes has tons of components, but most of the time you are going to be working with just a handful of them – Janashia, Nana (TechWorld)

That’s true, but it won’t be that easy: the certification covers all of them, so it’s important to understand and learn each one.

The previously mentioned Nodes fall into different categories:

  • Worker - hosts the Pods (a set of running containers)
    • Every cluster has at least one worker node
  • Control Plane - manages the worker nodes and the Pods
    • Its scheduling decides which Worker Node will be used to run each Pod (see the nodeSelector sketch below)
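
As a sketch of that last point (the disktype=ssd label and the Pod name are hypothetical), the manifest below shows one way to influence that decision: a nodeSelector restricts the scheduler to Worker Nodes carrying a given label, while without it the Control Plane picks any suitable worker on its own.

apiVersion: v1
kind: Pod
metadata:
  # Hypothetical name
  name: nginx-on-ssd
spec:
  containers:
  - name: nginx
    image: nginx
  # Only nodes labeled disktype=ssd are considered by the scheduler,
  # e.g. after: kubectl label node <node-name> disktype=ssd
  nodeSelector:
    disktype: ssd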
