diff --git a/README.md b/README.md
index d9db690..563633e 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ To configure the cluster mentioned, we will use Ubuntu server 20.04 (author uses

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (whatever it means).
-# Labs
+## Labs
* [Cluster architecture](./docs/00-kubernetes-architecture.md)
* [Container runtime](./docs/01-container-runtime.md)
@@ -19,7 +19,7 @@ To configure the cluster mentioned, we will use Ubuntu server 20.04 (author uses
* [ETCD](./docs/04-etcd.md)
* [Api Server](./docs/05-apiserver.md)
* [Apiserver - Kubelet integration](./docs/06-apiserver-kubelet.md)
-* [Controller manager](./docs/07-controller-manager.md)
-* [Scheduler](./docs/08-scheduler.md)
-* [Kube proxy](./docs/09-kubeproxy.md)
+* [Scheduler](./docs/07-scheduler.md)
+* [Controller manager](./docs/08-controller-manager.md)
+* [Kube-proxy](./docs/09-kubeproxy.md)
* [DNS in Kubernetes](./docs/10-dns.md)
diff --git a/docs/01-container-runtime.md b/docs/01-container-runtime.md
index a20cce4..0e02e2d 100644
--- a/docs/01-container-runtime.md
+++ b/docs/01-container-runtime.md
@@ -1,32 +1,30 @@
# Container runtime
-In this part of our tutorial we will focus of the container runtime.
+In this section, we will focus on the container runtime, as it is the part of Kubernetes responsible for running containers.

-Firt of all, container runtime is a tool which can be used by other kubernetes components (kubelet) to manage containers. In case if we have two parts of the system which communicate - we need to have some specification. Nad tehre is the cpecification - CRI.
-
-> The CRI is a plugin interface which enables the kubelet to use a wide variety of container runtimes, without having a need to recompile the cluster components.
-
-In this tutorial we will use [containerd](https://github.com/containerd/containerd) as tool for managing the containers on the node.
-
-On other hand there is a project under the Linux Foundation - OCI.
-> The OCI is a project under the Linux Foundation is aims to develop open industry standards for container formats and runtimes. The primary goal of OCI is to ensure container portability and interoperability across different platforms and container runtime implementations. The OCI has two main specifications, Runtime Specification (runtime-spec) and Image Specification (image-spec).
-
-In this tutorial we will use [runc](https://github.com/opencontainers/runc) as tool for running containers.
-
-Now, we can start with the configuration.
-
## runc
-Lets download runc binaries
+First of all, since Kubernetes is an orchestrator for containers, we need to figure out how to run containers.
+This is where the OCI comes in.
+
+> The OCI is a project under the Linux Foundation that aims to develop open industry standards for container formats and runtimes. The primary goal of OCI is to ensure container portability and interoperability across different platforms and container runtime implementations. The OCI has two main specifications, the Runtime Specification (runtime-spec) and the Image Specification (image-spec).
+
+As we can see from the description, the OCI is a standard that tells us what a container image is and how to run it.
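+
+To make this less abstract, here is roughly what a runtime-spec "bundle" looks like on disk (a sketch only; we will build one for busybox a few steps below, and its exact layout may differ slightly):
+```bash
+# An OCI bundle is just a directory with the unpacked image filesystem and a
+# config.json that tells the runtime how to start the container.
+ls ~/busybox-container
+# config.json  rootfs
+# config.json -> runtime-spec: process, args, mounts, namespaces
+# rootfs/     -> the image filesystem (its packaging is what the image-spec covers)
+```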
+
+But it is only a standard, so obviously some tool has to implement it. And indeed there is one: runc is the reference implementation of the OCI runtime specification.
+
+So let's install runc and use it to run a container.
+
+First of all, we need to download the runc binary
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64
```
-As download process complete, we need to move runc binaries to bin folder
+After the download completes, we need to move the runc binary to the proper folder
```bash
{
@@ -36,7 +34,7 @@ As download process complete, we need to move runc binaries to bin folder
}
```
-Now, as we have runc installed, we can run busybox container
+Now that we have runc configured, we can run a busybox container
```bash
{
@@ -51,7 +49,7 @@ sed -i 's/"sh"/"echo","Hello from container runned by runc!"/' config.json
}
```
-Now, we created all proper files, required by runc to run the container (including container confguration and files which will be accesible from container).
+In this step we downloaded the busybox image, unpacked it, and created all the files required by runc to run the container (including the container configuration and the files that will be accessible from inside the container). So, let's run our container
```bash
runc run busybox
@@ -62,7 +60,7 @@ Output:
Hello from container runned by runc!
```
-Great, everything works, now we need to clean up our workspace
+Great, we created our first container in this tutorial. Now we will clean up our workspace.
```bash
{
cd ~
@@ -72,16 +70,21 @@ rm -r busybox-container
## containerd
-As already mentioned, Container Runtime: The software responsible for running and managing containers on the worker nodes. The container runtime is responsible for pulling images, creating containers, and managing container lifecycles
+As we can see, runc can run containers, but the runc interface is not something Kubernetes knows how to talk to directly.
-Now, let's download containerd.
+There is another standard, the CRI, which is used by the kubelet to communicate with the container runtime.
+> The CRI is a plugin interface which enables the kubelet to use a wide variety of container runtimes, without having a need to recompile the cluster components.
+
+In this tutorial we will use [containerd](https://github.com/containerd/containerd) as a CRI-compatible container runtime.
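+
+Once containerd is installed and running (which we do below), a quick way to confirm that its CRI plugin is enabled is to list the registered plugins. This is just a sketch; the exact output columns may differ between containerd versions:
+```bash
+# The CRI plugin registers itself as type io.containerd.grpc.v1 with ID "cri"
+ctr plugins ls | grep cri
+# io.containerd.grpc.v1    cri    linux/amd64    ok
+```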
+
+To deploy containerd, we first need to download it.
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz
```
-Unzip containerd binaries to the bin directory
+After the download completes, we need to unpack the containerd binaries and move them to the proper folder
```bash
{
@@ -91,9 +94,11 @@ Unzip containerd binaries to the bin directory
}
```
-In comparison to the runc, containerd is a service which can be called by someone to run container.
+In contrast to runc, containerd works as a service which can be called to run containers. This means that we need to start it before we can communicate with it.
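+
+A small illustration of this client/server split, which you can try once the containerd service is up (output abbreviated and shown only as a sketch):
+```bash
+# ctr is only a client; the "Server" section appears only when it can reach
+# the containerd daemon over its unix socket.
+ctr version
+# Client:
+#   Version:  v1.4.4
+#   ...
+# Server:
+#   Version:  v1.4.4
+#   ...
+```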
-Before we will run containerd service, we need to configure it.
+We will configure containerd as a service.
+
+To do that, we first need to create the containerd configuration file
```bash
{
sudo mkdir -p /etc/containerd/
@@ -112,7 +117,7 @@ EOF
As we can see, we configured containerd to use runc (we installed before) to run containers.
-Now we can configure contanerd service
+After the configuration file is created, we need to create the containerd service
```bash
cat < An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
> The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
-So, lets set up kubelet and run some pod.
+
-First of all we need to download kubelet binary
+As we can see, in this section we will work with the next layer of Kubernetes components (so to speak).
+Previously we worked with containers, but in this step we will work with another abstraction Kubernetes has - the pod.
+As you remember, in the end Kubernetes runs pods. So now we will try to create one, but in a slightly unusual way: instead of using the Kubernetes API (which we haven't configured yet), we will create pods using the kubelet only.
+To do that we will use the [static pods](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/) functionality.
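+
+Under the hood, the kubelet simply watches a directory for pod manifests and runs whatever it finds there, with no API server involved. Once the kubelet is configured below, you can check which directory it watches (the option names and paths here are the usual ones, not necessarily exactly what this tutorial's config uses):
+```bash
+# staticPodPath (config file) / --pod-manifest-path (flag) point the kubelet at
+# the static pod manifest directory, /etc/kubernetes/manifests in our case.
+grep -ri -e staticPodPath -e pod-manifest-path /etc/kubernetes/ /etc/systemd/system/ 2>/dev/null
+```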
+
+So, let's begin.
+
+First of all, we need to download the kubelet binary.
```bash
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
```
-Make file exacutable and move to the bin folder
-
+After the download completes, make the kubelet binary executable and move it to the proper folder
```bash
{
chmod +x kubelet
@@ -26,8 +32,7 @@ Make file exacutable and move to the bin folder
}
```
-And the last part when configuring kubelet - create service to run kubelet.
-
+As the kubelet is a service which manages the pods running on the node, we need to configure that service
```bash
cat < Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them.
-Before we will create static pod manifests, we need to create folders where we will place our pods (as we can see from kibelet configuration, it sould be /etc/kubernetes/manifests)
+Before we create the static pod manifests, we need to create the folder where we will place our pods (the same folder we configured in the kubelet configuration)
```bash
{
@@ -99,8 +102,7 @@ mkdir /etc/kubernetes/manifests
}
```
-After directory created, we can create static with busybox inside
-
+After the directory is created, we can create a static pod with a busybox container inside
```bash
cat < /etc/kubernetes/manifests/static-pod.yml
apiVersion: v1
@@ -118,27 +120,68 @@ spec:
EOF
```
-We can check if containerd runned new container
+Now let's use the ctr tool we already know to list the created containers
```bash
-ctr tasks ls
+ctr containers ls
```
Output:
```bash
-TASK PID STATUS
+CONTAINER IMAGE RUNTIME
```
Looks like containerd didn't created any containrs yet?
-Of course it may be true, but baed on the output of ctr command we can't answer that question. It is not true (of course it may be true, but based on the output of the ctr command we can't confirm that ////more about that here)
+Of course it may be true, but based on the output of the ctr command we can't confirm that.
-To see containers managed by kubelet lets install [crictl](http://google.com/crictl).
-Download binaries
+Containerd has a namespace feature. A namespace is a mechanism used to provide isolation and separation between different sets of resources.
+
+We can check containerd namespaces by running
+```bash
+ctr namespace ls
+```
+
+Output:
+```
+NAME LABELS
+default
+k8s.io
+```
+
+Containers created by the kubelet are located in the k8s.io namespace; to see them, run
+```bash
+ctr --namespace k8s.io containers ls
+```
+
+Output:
+```
+CONTAINER IMAGE RUNTIME
+33d2725dd9f343de6dd0d4b77161a532ae17d410b266efb31862605453eb54e0 k8s.gcr.io/pause:3.2 io.containerd.runtime.v1.linux
+e75eb4ac89f32ccfb6dc6e894cb6b4429b6dc70eba832bc6dea4dc69b03dec6e sha256:af2c3e96bcf1a80da1d9b57ec0adc29f73f773a4a115344b7e06aec982157a33 io.containerd.runtime.v1.linux
+```
+
+And to get the container status we can run
+```bash
+ctr --namespace k8s.io task ls
+```
+
+Output:
+```
+TASK PID STATUS
+e75eb4ac89f32ccfb6dc6e894cb6b4429b6dc70eba832bc6dea4dc69b03dec6e 1524 RUNNING
+33d2725dd9f343de6dd0d4b77161a532ae17d410b266efb31862605453eb54e0 1472 RUNNING
+```
+
+But this is not what we expected; we expected to see a container named busybox. Of course, there is no magic: all the information about the pod this container belongs to, the Kubernetes container name, etc. is stored in the container metadata and can be easily extracted with another ctr command (like this - ctr --namespace k8s.io containers info a597ed1f8dee6a43d398173754fd028c7ac481ee27e09ad4642187ed408814b4). But we want to see it in a bit more readable format, which is why we will use a different tool - [crictl](https://github.com/kubernetes-sigs/cri-tools).
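+
+For example (the container ID is just the one from the listing above, and the label values are omitted here - this is only a hint of what to expect in the output):
+```bash
+# The CRI plugin stores the Kubernetes metadata as labels on the container.
+ctr --namespace k8s.io containers info e75eb4ac89f32ccfb6dc6e894cb6b4429b6dc70eba832bc6dea4dc69b03dec6e | grep io.kubernetes
+#   "io.kubernetes.container.name": "...",
+#   "io.kubernetes.pod.name": "...",
+#   "io.kubernetes.pod.namespace": "...",
+```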
+
+In comparison to ctr (which can only work with containerd), crictl is a tool which interacts with any CRI-compliant runtime; containerd just happens to be the runtime we use in this tutorial. Also, crictl presents information in a more "kubernetes" way (it shows pods and containers with the same names as in Kubernetes).
+
+So, let's download the crictl binaries
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz
```
-Install (move to bin folder)
+After the download completes, move the crictl binaries to the proper folder
```bash
{
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
@@ -146,7 +189,8 @@ Install (move to bin folder)
sudo mv crictl /usr/local/bin/
}
```
-And configure a bit
+
+And configure it a bit
```bash
cat < /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
@@ -156,6 +200,8 @@ debug: false
EOF
```
+As already mentioned, crictl can be configured to use any CRI-compliant runtime; in our case we configured containerd (by providing the containerd socket path).
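+
+The same endpoint can also be passed directly on the command line instead of via the config file - just an alternative way of doing the same thing:
+```bash
+# Point crictl at the containerd CRI socket explicitly for a single call
+crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
+```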
+
And we can finaly get the list of pods running on our server
```bash
crictl pods
@@ -196,7 +242,7 @@ Hello from static pod
Great, now we can run pods on our server.
-Before we will continue, remove our pods running
+Now, let's clean up our workspace and continue with the next section
```bash
rm /etc/kubernetes/manifests/static-pod.yml
```
diff --git a/docs/03-pod-networking.md b/docs/03-pod-networking.md
index 200f273..f49c409 100644
--- a/docs/03-pod-networking.md
+++ b/docs/03-pod-networking.md
@@ -1,10 +1,10 @@
# Pod networking
-In this part of tutorial, we will have closer look at the container networking
-And lets start with nginx runned inside container.
+Now we know how the kubelet runs containers, and we know how to run a pod without the other Kubernetes cluster components.
-Create manifest for nginx static pod
+Let's experiment with static pods a bit.
+We will create a static pod again, but this time we will run nginx instead of busybox
```bash
cat < /etc/kubernetes/manifests/static-nginx.yml
apiVersion: v1
@@ -68,8 +68,6 @@ Commercial support is available at
```
Now, lets try to create 1 more nginx container.
-
-
```bash
cat < /etc/kubernetes/manifests/static-nginx-2.yml
apiVersion: v1
@@ -130,7 +128,8 @@ nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
As we can see, the reason of the exit state - adress already in use.
Our address already in use by our other container.
-We received this error because we run two pods with configuration
+We received this error because we ran two pods which require access to the same port on our server.
+This was done by specifying
```
...
spec:
@@ -138,9 +137,9 @@ spec:
...
```
-As we can see our pod are runned in host network.
-Lets try to fix this by updating our manifests to run containers in not host network.
+This option runs our containers on the host without any network isolation (almost the same as running two nginx instances on the same host without containers).
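+
+You can see this for yourself while the hostNetwork pods are still defined: the port is taken directly in the host's network namespace (output abbreviated and shown only as an illustration):
+```bash
+# nginx from the first pod is listening on port 80 of the host itself
+sudo ss -tlnp | grep ':80 '
+# LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=...,fd=...))
+```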
+Now we will try to update our pod manifests to run our containers in separate network "namespaces".
```bash
{
cat < /etc/kubernetes/manifests/static-nginx.yml
@@ -171,8 +170,9 @@ EOF
}
```
-And check our pods once again
+As you can see, we simply removed the hostNetwork: true configuration option.
+So, let's check what we have
```bash
crictl pods
```
@@ -201,9 +201,9 @@ As we can see cni plugin is not initialized. But what is cni plugin.
> A CNI plugin is a binary executable that is responsible for configuring the network interfaces and routes of a container or pod. It communicates with the container runtime (such as Docker or CRI-O) to set up networking for the container or pod.
-As we can see kubelet can't configure network for pod by himself, same as with containers, to configure network kubelet use some 'protocol' to communicate with 'someone' who can configure networ.
+As we can see, the kubelet can't configure the network for a pod by itself (or with the help of containerd). Same as with containers, to configure the network the kubelet uses some 'protocol' to communicate with 'someone' who can configure the network.
-Now, we will configure the cni plugin 1for our instalation.
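+
+That 'protocol' is the CNI itself, and it is surprisingly simple. Roughly, it works like this (a sketch only; the file and binary paths here are assumptions, not something we have set up yet):
+```bash
+# The runtime executes a plugin binary, passing parameters via environment
+# variables and the network configuration as JSON on stdin.
+CNI_COMMAND=ADD CNI_CONTAINERID=<container-id> CNI_NETNS=/var/run/netns/<ns> \
+CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin \
+  /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf
+```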
+Now, we will configure the cni plugin for our kubelet.
First of all we need to download that plugin
@@ -262,8 +262,11 @@ EOF
}
```
-And finaly we need to update our kubelet config (add network-plugin configuration option)
+Of course, all the configuration options here are important, but I want to highlight two of them (see the illustrative fragment below):
+- ranges - information about the subnets from which IP addresses will be assigned to our pods
+- routes - information on how to route traffic between nodes; as we have a single-node Kubernetes cluster, the configuration is very easy
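+
+For reference, the relevant part of the CNI config might look roughly like this (an illustrative fragment; the file name is an assumption, and only the 10.240.1.0/24 subnet is the one actually used in this tutorial):
+```bash
+cat /etc/cni/net.d/10-bridge.conf
+# ...
+#   "ipam": {
+#     "type": "host-local",
+#     "ranges": [[{"subnet": "10.240.1.0/24"}]],
+#     "routes": [{"dst": "0.0.0.0/0"}]
+#   }
+# ...
+```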
+Now, update our kubelet config (add the network-plugin configuration option)
```bash
cat <
```
-Now, when we fixed everything, lets ckeck if our pods are in running state
-
+Now, after all the fixes are applied and we have a working kubelet, we can check whether our pods were created
```bash
crictl pods
```
@@ -347,7 +347,7 @@ CONTAINER IMAGE CREATED STATE
They are also in running state
On this step if we will try to curl localhost nothing will happen.
-Our pods are runned in separate network namespaces, and each pod has own ip address.
+Our pods run in separate network namespaces, and each pod has its own IP address.
We need to define it.
```bash
@@ -370,9 +370,9 @@ Output:
...
```
-During the plugin configuration we remember that we configure the subnet pod our pods to be 10.240.1.0/24.
-Now, we can curl our container.
+As we remember, during the plugin configuration we set the subnet for our pods to 10.240.1.0/24. So the container received its IP from the specified range; in my case it was 10.240.1.1.
+So, let's try to curl our container.
```bash
{
PID=$(crictl pods --label app=static-nginx-2 -q)
@@ -409,7 +409,7 @@ Commercial support is available at