From 0d2e3d93d1a44fae5ce7759e74296f0fe4d61345 Mon Sep 17 00:00:00 2001
From: rsavchuk
Date: Thu, 8 Jun 2023 22:25:56 +0200
Subject: [PATCH] Update spelling

---
 docs/01-container-runtime.md  |  36 +++++-----
 docs/02-kubelet.md            |  12 ++--
 docs/03-pod-networking.md     | 120 ++++++++++++++++------------
 docs/04-etcd.md               |  79 ++++++++++++----------
 docs/05-apiserver.md          |  74 ++++++++++-----------
 docs/06-apiserver-kubelet.md  |   9 +--
 docs/07-scheduler.md          |  16 +++++
 docs/08-controller-manager.md |  10 +++
 docs/09-kubeproxy.md          |  10 +--
 9 files changed, 200 insertions(+), 166 deletions(-)

diff --git a/docs/01-container-runtime.md b/docs/01-container-runtime.md
index 66b3495..2e1b12c 100644
--- a/docs/01-container-runtime.md
+++ b/docs/01-container-runtime.md
@@ -28,9 +28,9 @@ After the download process is complete, we need to move runc binaries to proper

```bash
{
-	sudo mv runc.amd64 runc
-	chmod +x runc
-	sudo mv runc /usr/local/bin/
+  sudo mv runc.amd64 runc
+  chmod +x runc
+  sudo mv runc /usr/local/bin/
}
```

@@ -38,14 +38,14 @@ Now, as we have runc configured, we can run busybox container

```bash
{
-mkdir -p ~/busybox-container/rootfs/bin
-cd ~/busybox-container/rootfs/bin
-wget https://www.busybox.net/downloads/binaries/1.31.0-defconfig-multiarch-musl/busybox-x86_64
-chmod +x busybox-x86_64
-./busybox-x86_64 --install .
-cd ~/busybox-container
-runc spec
-sed -i 's/"sh"/"echo","Hello from container runned by runc!"/' config.json
+  mkdir -p ~/busybox-container/rootfs/bin
+  cd ~/busybox-container/rootfs/bin
+  wget https://www.busybox.net/downloads/binaries/1.31.0-defconfig-multiarch-musl/busybox-x86_64
+  chmod +x busybox-x86_64
+  ./busybox-x86_64 --install .
+  cd ~/busybox-container
+  runc spec
+  sed -i 's/"sh"/"echo","Hello from container run by runc!"/' config.json
}
```

@@ -63,8 +63,8 @@ Hello from container runned by runc!

Great, we created our first container in this tutorial. Now we will clean up our workspace.
```bash
{
-cd ~
-rm -r busybox-container
+  cd ~
+  rm -r busybox-container
}
```

@@ -88,9 +88,9 @@ After download process complete, we need to unzip and move containerd binaries t

```bash
{
-	mkdir containerd
-	tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
-	sudo mv containerd/bin/* /bin/
+  mkdir containerd
+  tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
+  sudo mv containerd/bin/* /bin/
}
```

@@ -233,8 +233,8 @@ ctr containers rm busybox-container

We can check that list of containers and tasks should be empty
```bash
{
-ctr task ls
-ctr containers ls
+  ctr task ls
+  ctr containers ls
}
```

diff --git a/docs/02-kubelet.md b/docs/02-kubelet.md
index 0872d1f..ded3c9a 100644
--- a/docs/02-kubelet.md
+++ b/docs/02-kubelet.md
@@ -16,6 +16,8 @@ Previously we worked with containers, but on this step, we will work with other

As you remember at the end, kubernetes usually start pods. So now we will try to create it. But it is a bit not the usual way, instead of using kubernetes api (which we didn't configure yet), we will create pods with the usage of kubelet only. To do that we will use the [static pods](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/) functionality.

+## configure
+
So, let's begin.

First of all, we need to download kubelet.

@@ -91,14 +93,16 @@ Output:
...
```

+## verify
+
After kubelet service is up and running, we can start creating our pods.
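+Before moving on, it is worth double-checking that the service really is healthy. A minimal sanity check (assuming the systemd unit is named kubelet, as configured above) could look like this:
+
+```bash
+{
+  systemctl is-active kubelet
+  journalctl -u kubelet --no-pager | tail -n 5
+}
+```
+
+If the first command prints "active" and the recent log lines show no repeating errors, kubelet is ready to pick up static pod manifests.
+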
Before we will create static pod manifests, we need to create folders where we will place our pods (same as we configured in kubelet)

```bash
{
-mkdir /etc/kubernetes
-mkdir /etc/kubernetes/manifests
+  mkdir /etc/kubernetes
+  mkdir /etc/kubernetes/manifests
}
```

@@ -250,8 +254,8 @@ rm /etc/kubernetes/manifests/static-pod.yml

It takes some time to remove the pods, we can ensure that pods are deleted by running
```bash
{
-crictl pods
-crictl ps
+  crictl pods
+  crictl ps
}
```

diff --git a/docs/03-pod-networking.md b/docs/03-pod-networking.md
index f49c409..08436e7 100644
--- a/docs/03-pod-networking.md
+++ b/docs/03-pod-networking.md
@@ -4,7 +4,7 @@ Now, we know how kubelet runs containers and we know how to run pod without oth

Let's experiment with static pod a bit.

-We will create static pod, but this time we will run nginx, instead of busybox
+We will create a static pod, but this time we will run nginx instead of busybox
```bash
cat <<EOF > /etc/kubernetes/manifests/static-nginx.yml
apiVersion: v1
@@ -21,7 +21,7 @@ spec:
EOF
```

-After manifest created we can check wheather our nginx container is created
+After the manifest is created, we can check whether our nginx container is created

```bash
crictl pods
```

@@ -33,8 +33,8 @@ POD ID CREATED STATE NAME
14662195d6829 About a minute ago Ready static-nginx-example-server default 0 (default)
```

-As we can see out nginx container is up and running.
-Let's check wheather it works as expected.
+As we can see our nginx container is up and running.
+Let's check whether it works as expected.

```bash
curl localhost
```

@@ -67,7 +67,7 @@ Commercial support is available at
```

-Now, lets try to create 1 more nginx container.
+Now, let's try to create 1 more nginx container.
```bash
cat <<EOF > /etc/kubernetes/manifests/static-nginx-2.yml
apiVersion: v1
@@ -84,7 +84,7 @@ spec:
EOF
```

-Again will try to check if our pod is in running state.
+Again, we will try to check if our pod is in a running state.

```bash
crictl pods
```

@@ -97,13 +97,13 @@ a299a86893e28 40 seconds ago Ready static-nginx-2-examp
14662195d6829 4 minutes ago Ready static-nginx-example-server default 0 (default)
```

-Looks like out pod is up, but if we will try to check the underlying containers we may be surprised.
-
+Looks like our pod is up, but if we try to check the underlying containers we may be surprised.
```bash
crictl ps -a
```

+Output:
```
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9e8cb98b87aed 6efc10a0510f1 42 seconds ago Exited nginx 3 b013eca0e9d33
...
```

As you can see our second container is in exit state.
-To check the reason of the Exit state we can review container logs
+To check the reason for the exit state we can review the container logs

```bash
crictl logs $(crictl ps -q -s Exited)
```

-In the logs, you shoud see something like this
+Output:
```
...
2023/04/18 20:49:47 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
...
```

-As we can see, the reason of the exit state - adress already in use.
-Our address already in use by our other container.
+As we can see, the reason for the exit state is that the address is already in use.
+The second nginx container tries to use the port that is already in use by the first one.

-We received this error because we run two pods which require an access to the same port on our server.
-This was done by specifying
+We received this error because we run two nginx applications that use the same host port. That was done by specifying
```
...
spec:
  hostNetwork: true
...
```

-This option runs our container on our host without any network isolation (almost the same as running two nginx without on the same host without containers)
+This option tells kubelet that the containers created should run on the host without any network isolation (almost the same as running two nginx instances on the same host without containers)

-Now we will try to update our pod manifests to run our containers in separate network "namespaces"
+Now we will try to update our pod manifests to run containers in separate network namespaces
```bash
{
cat <<EOF > /etc/kubernetes/manifests/static-nginx.yml
@@ -170,9 +169,9 @@ EOF
}
```

-As you can see we simply removed hostNetwork: true configuration option.
+As you can see we removed the "hostNetwork: true" configuration option.

-So, lets check what we have
+So, let's check what we have
```bash
crictl pods
```

Output:
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
```

-Very strange, we see nothing.
-To define the reason why no pods created lets review the kubelet logs (but as we know what we are looking for, we will chit a bit)
+We see nothing.
+To find out why no pods were created, let's review the kubelet logs
```bash
journalctl -u kubelet | grep NetworkNotReady
```

Output:
```
...
May 03 13:43:43 example-server kubelet[23701]: I0503 13:43:43.862719 23701 eve
...
```

-As we can see cni plugin is not initialized. But what is cni plugin.
+As we can see, the cni plugin is not initialized. But what is a cni plugin?

> CNI stands for Container Networking Interface. It is a standard for defining how network connectivity is established and managed between containers, as well as between containers and the host system in a container runtime environment. Kubernetes uses CNI plugins to implement networking for pods.

> A CNI plugin is a binary executable that is responsible for configuring the network interfaces and routes of a container or pod. It communicates with the container runtime (such as Docker or CRI-O) to set up networking for the container or pod.

-As we can see kubelet can't configure network for pod by himself (or with the help of containerd). Same as with containers, to configure network kubelet use some 'protocol' to communicate with 'someone' who can configure networ.
+As we can see, kubelet can't configure the network for a pod by itself (or with the help of containerd). Same as with containers, to configure a network kubelet uses some 'protocol' to communicate with 'someone' who can configure a network.

-Now, we will configure the cni plugin for our kubelet.
+Now, we will configure the cni plugin.
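+Before installing anything, we can confirm that kubelet really has nothing to work with yet. A quick look at the standard locations (these are the default paths cni plugins use; on a fresh server they may simply not exist yet):
+
+```bash
+{
+  ls /etc/cni/net.d 2>/dev/null || echo "no CNI configuration found"
+  ls /opt/cni/bin 2>/dev/null || echo "no CNI binaries found"
+}
+```
+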
-First of all we need to download that plugin
+First of all, we need to download that plugin
```bash
wget -q --show-progress --https-only --timestamping \
  https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
```

-Now, we will create proper folders structure for our plugin
-
+Now, we will create the proper folder structure
```bash
sudo mkdir -p \
  /etc/cni/net.d \
@@ -221,17 +219,16 @@ sudo mkdir -p \
```

here:
-- net.d - folder where we will store our plugin configuration files
+- net.d - folder where plugin configuration files are stored
- bin - folder for plugin binaries

-Now, we will untar our plugin to proper folder
+Now, we will untar the plugin to the proper folder
```bash
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
```

And create plugin configuration
-
```bash
{
cat <
```

-Now, after all fixes applyed and we have working kubelet, we can check wheather our pods created
+Now, after all fixes applied and we have a working kubelet, we can check whether the pods were created
```bash
crictl pods
```
@@ -332,7 +329,6 @@ b9c684fa20082 2 minutes ago Ready static-nginx-example
```

Pods are ok, but what about containers
-
```bash
crictl ps
```
@@ -346,15 +342,15 @@ CONTAINER IMAGE CREATED STATE
```

They are also in running state

-On this step if we will try to curl localhost nothing will happen.
-Our pods are runned in separate network namespaces, and each pod has its own ip address.
+In this step, if we try to curl localhost, nothing will happen.
+Our pods are run in separate network namespaces, and each pod has its own IP address.
We need to define it.

```bash
{
-PID=$(crictl pods --label app=static-nginx-2 -q)
-CID=$(crictl ps -q --pod $PID)
-crictl exec $CID ip a
+  PID=$(crictl pods --label app=static-nginx-2 -q)
+  CID=$(crictl ps -q --pod $PID)
+  crictl exec $CID ip a
}
```

Output:
```
...
```

-During the plugin configuration we remember that we configure the subnet pod our pods to be 10.240.1.0/24. So, the container received its IP from the range specified, in my case it was 10.240.1.1.
+As we remember, during plugin configuration we set the pod subnet to 10.240.1.0/24. So, the container received its IP from the range specified; in my case, it was 10.240.1.1.

-So, lets try to curl our container.
+So, let's try to curl the container.
```bash
{
-PID=$(crictl pods --label app=static-nginx-2 -q)
-CID=$(crictl ps -q --pod $PID)
-IP=$(crictl exec $CID ip a | grep 240 | awk '{print $2}' | cut -f1 -d'/')
-curl $IP
+  PID=$(crictl pods --label app=static-nginx-2 -q)
+  CID=$(crictl ps -q --pod $PID)
+  IP=$(crictl exec $CID ip a | grep 240 | awk '{print $2}' | cut -f1 -d'/')
+  curl $IP
}
```

Output:
```
<!DOCTYPE html>
...
Commercial support is available at
...
```

-As we can see we successfully reached out container from our host.
+As we can see, we successfully reached our container from the host.

-But we remember that cni plugin also responsible to configure communication between containers.
-Lets check
+But we remember that the cni plugin is also responsible for configuring communication between containers.
+Let's check

To do that we will run 1 more pod with busybox inside
-
```bash
cat <<EOF > /etc/kubernetes/manifests/static-pod.yml
apiVersion: v1
@@ -433,7 +428,7 @@ spec:
EOF
```

-Now, lets, check and ensure that pod created
+Now, let's check and ensure that the pod was created

```bash
crictl pods
```

Output:
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
...
a6881b7bba036 18 minutes ago Ready static-nginx-example-server default 0 (default)
4dd70fb8f5f53 18 minutes ago Ready static-nginx-2-example-server default 0 (default)
```

-As pod is in running state, we can check wheather our other nging pods are available
+As the pod is in a running state, we can check whether the other nginx pods are available

```bash
{
-PID=$(crictl pods --label app=static-nginx-2 -q)
-CID=$(crictl ps -q --pod $PID)
-IP=$(crictl exec $CID ip a | grep 240 | awk '{print $2}' | cut -f1 -d'/')
-PID_0=$(crictl pods --label app=static-pod -q)
-CID_0=$(crictl ps -q --pod $PID_0)
-crictl exec $CID_0 wget -O - $IP
+  PID=$(crictl pods --label app=static-nginx-2 -q)
+  CID=$(crictl ps -q --pod $PID)
+  IP=$(crictl exec $CID ip a | grep 240 | awk '{print $2}' | cut -f1 -d'/')
+  PID_0=$(crictl pods --label app=static-pod -q)
+  CID_0=$(crictl ps -q --pod $PID_0)
+  crictl exec $CID_0 wget -O - $IP
}
```

Output:
```
...
written to stdout
```

As we can see we successfully reached our container from busybox.

-In this section we configured CNI plugin for our intallation and now we can run pods which can communicate with each other over the network.
+In this section, we configured the cni plugin. Now we can run pods that can communicate with each other over the network.

-In nest section we will procede with the kubernetes cluster configuration, but before, we need to clean up workspace.
+Now we will clean up the workspace

```bash
rm /etc/kubernetes/manifests/static-*
```

And check if app pods are removed
-
```bash
crictl pods
```

Output:
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
```

+Note: it takes some time to remove all created resources.
+
Next: [ETCD](./04-etcd.md)
\ No newline at end of file
diff --git a/docs/04-etcd.md b/docs/04-etcd.md
index 4c537e4..4b15fba 100644
--- a/docs/04-etcd.md
+++ b/docs/04-etcd.md
@@ -1,36 +1,41 @@
# ETCD

-At this point we already know that we can run pods even withour API server. But current aproach is not very confortable to use, to create pod we need to place some manifest in some place. It is not very comfortable to manage. Now we will start our jorney of configuring "real" (more real than current, because current doesn't look like kubernetes at all) kubernetes. And of course we need to start with the storage.
+At this point, we already know that we can run pods even without an API server. But to create a pod, we need to place a manifest file in a specific folder, which is not very convenient to manage. Now we will start configuring a "real" (more real than the current one, because the current one doesn't look like kubernetes at all) kubernetes cluster.

![image](./img/04_cluster_architecture_etcd.png "Kubelet")

-For kubernetes (at least for original one if I can say so) we need to configura database called [etcd](https://etcd.io/).
+For kubernetes (at least for the original one if I can say so) we need to configure a database called [etcd](https://etcd.io/).

>etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node.
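+To make the key-value idea more concrete: once our etcd instance is up (we will configure it below), it can be exercised with the bundled etcdctl client. A small illustration (the exact TLS flags depend on the certificates we generate in this section):
+
+```bash
+{
+  etcdctl put /demo/key "hello"
+  etcdctl get /demo/key
+}
+```
+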
-Our etcd will be configured as single node database with authentication (by useage of client cert file).
+Our etcd will be configured as a single node database with authentication.

-So, lets start.
+So, let's start.

-As I already said, communication with our etcd cluster will be secured, it means that we need to generate some keys, to encrypt all the trafic.
-To do so, we need to download tools which may help us to generate certificates
+## certificates
+
+We will configure etcd to authenticate users by the certificate file used during communication.
+To do so, we need to generate some certs.
+We will create the certificate files using the cfssl and cfssljson tools (which we need to install first)
+
+First of all, we will download the tools mentioned
```bash
wget -q --show-progress --https-only --timestamping \
  https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 \
  https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64
```

-And install
+And install them
```bash
{
-mv cfssl_1.4.1_linux_amd64 cfssl
-mv cfssljson_1.4.1_linux_amd64 cfssljson
-chmod +x cfssl cfssljson
-sudo mv cfssl cfssljson /usr/local/bin/
+  mv cfssl_1.4.1_linux_amd64 cfssl
+  mv cfssljson_1.4.1_linux_amd64 cfssljson
+  chmod +x cfssl cfssljson
+  sudo mv cfssl cfssljson /usr/local/bin/
}
```

-After the tools installed successfully, we need to generate ca certificate.
+After the tools are installed successfully, we need to generate a ca certificate.
A ca (Certificate Authority) certificate, also known as a root certificate or a trusted root certificate, is a digital certificate that is used to verify the authenticity of other certificates.

```bash
@@ -125,21 +130,7 @@ kubernetes-key.pem kubernetes.pem
```

-Now, we have all required certificates, so, lets download etcd
-```bash
-wget -q --show-progress --https-only --timestamping \
-  "https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
-```
-
-After donload complete, we can move etcd binaries to proper folders
-```bash
-{
-  tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
-  sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
-}
-```
-
-Now, we can start wioth the configurations of the etcd service. First of all, we need to discribute previuosly generated certificates to the proper folder
+And distribute the certificate files created
```bash
{
  sudo mkdir -p /etc/etcd /var/lib/etcd
@@ -150,7 +141,23 @@ Now, we can start wioth the configurations of the etcd service. First of all, we
}
```

-Create etcd service configuration file
+## configure
+
+Now, we have all the required certificates, so let's download etcd
+```bash
+wget -q --show-progress --https-only --timestamping \
+  "https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
+```
+
+After the download is complete, we can move the etcd binaries to the proper folders
+```bash
+{
+  tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
+  sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
+}
+```
+
+Now, we can configure the etcd service
```bash
cat <

>The Kubernetes API server validates and configures data for the api objects which include pods, services, replicationcontrollers, and others. The API Server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.

-As you can see from the description adpi server is central (not main) component of kubernetes cluster.
+As you can see from the description, api server is a central (not the main) component of the kubernetes cluster.

![image](./img/05_cluster_architecture_apiserver.png "Kubelet")

## certificates

-Before we begin with configuration of API server, we need to create certificates for kubernetes that will be used to sign service account tokens.
-
+Before we begin with the configuration of the api server, we need to create certificates for kubernetes that will be used to sign service account tokens.
```bash
{
cat > service-account-csr.json <
```

```bash
{
cat > admin-csr.json <
```

As you can see, our pod still in pending mode. To define the reason of this, we will review the logs of our scheduler.
+
+```bash
+journalctl -u kube-scheduler | grep not-ready
+```
+
+Output:
+```
...
May 21 20:52:25 example-server kube-scheduler[91664]: I0521 20:52:25.471604 91664 factory.go:338] "Unable to schedule pod; no fit; waiting" pod="default/hello-world" err="0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
...
```

Now we need to clean-up our wirkspace
kubectl delete -f pod.yaml
```

+Check if the pod was deleted
+```bash
+kubectl get pod
+```
+
+Output:
+```
+No resources found in default namespace.
+```
+
Next: [Controller manager](./08-controller-manager.md )
\ No newline at end of file
diff --git a/docs/08-controller-manager.md b/docs/08-controller-manager.md
index 3d6d844..c11818d 100644
--- a/docs/08-controller-manager.md
+++ b/docs/08-controller-manager.md
@@ -237,4 +237,14 @@ Now, when our controller manager configured, lets clean up our workspace.
kubectl delete -f deployment.yaml
```

+Check if the pod created by the deployment was deleted
+```bash
+kubectl get pod
+```
+
+Output:
+```
+No resources found in default namespace.
+```
+
Next: [Kube-proxy](./09-kubeproxy.md)
\ No newline at end of file
diff --git a/docs/09-kubeproxy.md b/docs/09-kubeproxy.md
index 5aaf1ea..6774f21 100644
--- a/docs/09-kubeproxy.md
+++ b/docs/09-kubeproxy.md
@@ -142,6 +142,6 @@ writing to stdout
written to stdout
```

-Note: it take some time to apply user permission. During this you can steel see permission error.
+Note: usually, it takes some time to apply all RBAC policies. During this time, you may still see permission errors.

As you can see, we successfully received the response from the nginx. But to do that we used the IP address of the pod. To solve service discovery issue, kubernetes has special component - service. Now we will create it.

@@ -256,8 +258,8 @@ Now, we can distribute created configuration file.

```bash
{
-sudo mkdir -p /var/lib/kube-proxy
-sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
+  sudo mkdir -p /var/lib/kube-proxy
+  sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
}
```

@@ -271,8 +273,8 @@ wget -q --show-progress --https-only --timestamping \

And install it
```bash
{
-	chmod +x kube-proxy
-	sudo mv kube-proxy /usr/local/bin/
+  chmod +x kube-proxy
+  sudo mv kube-proxy /usr/local/bin/
}
```
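+
+As a final sanity check, we can confirm that the binary is installed and on the PATH (a minimal check; it verifies only the binary itself, not the cluster configuration):
+
+```bash
+kube-proxy --version
+```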