From 864e1cd8365724ce2721ceb2b8046552f741b398 Mon Sep 17 00:00:00 2001 From: rsavchuk Date: Mon, 22 May 2023 22:39:31 +0200 Subject: [PATCH] Update manuals --- README.md | 8 +- docs/01-container-runtime.md | 111 +- docs/02-kubelet.md | 96 +- docs/03-pod-networking.md | 46 +- docs/04-etcd.md | 63 +- docs/05-apiserver.md | 219 ++-- docs/06-apiserver-kubelet.md | 134 ++- docs/07-scheduler.md | 264 +++++ ...er-manager.md => 08-controller-manager.md} | 97 +- docs/08-scheduler.md | 151 --- docs/09-kubeproxy.md | 187 ++-- docs/10-dns.md | 60 +- docs/Untitled-1.md | 50 - ... => 07_cluster_architecture_scheduler.png} | Bin ...uster_architecture_controller_manager.png} | Bin docs/single-node-cluster.md | 959 ------------------ 16 files changed, 911 insertions(+), 1534 deletions(-) create mode 100644 docs/07-scheduler.md rename docs/{07-controller-manager.md => 08-controller-manager.md} (58%) delete mode 100644 docs/08-scheduler.md delete mode 100644 docs/Untitled-1.md rename docs/img/{08_cluster_architecture_scheduler.png => 07_cluster_architecture_scheduler.png} (100%) rename docs/img/{07_cluster_architecture_controller_manager.png => 08_cluster_architecture_controller_manager.png} (100%) delete mode 100644 docs/single-node-cluster.md diff --git a/README.md b/README.md index d9db690..563633e 100644 --- a/README.md +++ b/README.md @@ -10,7 +10,7 @@ To configure the cluster mentioned, we will use Ubuntu server 20.04 (author uses Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (whatever it means).

-# Labs
+## Labs

* [Cluster architecture](./docs/00-kubernetes-architecture.md)
* [Container runtime](./docs/01-container-runtime.md)
@@ -19,7 +19,7 @@ To configure the cluster mentioned, we will use Ubuntu server 20.04 (author uses
* [ETCD](./docs/04-etcd.md)
* [Api Server](./docs/05-apiserver.md)
* [Apiserver - Kubelet integration](./docs/06-apiserver-kubelet.md)
-* [Controller manager](./docs/07-controller-manager.md)
-* [Scheduler](./docs/08-scheduler.md)
-* [Kube proxy](./docs/09-kubeproxy.md)
+* [Scheduler](./docs/07-scheduler.md)
+* [Controller manager](./docs/08-controller-manager.md)
+* [Kube-proxy](./docs/09-kubeproxy.md)
* [DNS in Kubernetes](./docs/10-dns.md)
diff --git a/docs/01-container-runtime.md b/docs/01-container-runtime.md
index a20cce4..0e02e2d 100644
--- a/docs/01-container-runtime.md
+++ b/docs/01-container-runtime.md
@@ -1,32 +1,30 @@
 # Container runtime
 
-In this part of our tutorial we will focus of the container runtime.
+In this section, we will focus on the container runtime, the part of Kubernetes responsible for running containers.
 
 ![image](./img/01_cluster_architecture_container_runtime.png "Container runtime")
 
-Firt of all, container runtime is a tool which can be used by other kubernetes components (kubelet) to manage containers. In case if we have two parts of the system which communicate - we need to have some specification. Nad tehre is the cpecification - CRI.
-
-> The CRI is a plugin interface which enables the kubelet to use a wide variety of container runtimes, without having a need to recompile the cluster components.
-
-In this tutorial we will use [containerd](https://github.com/containerd/containerd) as tool for managing the containers on the node.
-
-On other hand there is a project under the Linux Foundation - OCI.
-> The OCI is a project under the Linux Foundation is aims to develop open industry standards for container formats and runtimes. The primary goal of OCI is to ensure container portability and interoperability across different platforms and container runtime implementations. The OCI has two main specifications, Runtime Specification (runtime-spec) and Image Specification (image-spec).
-
-In this tutorial we will use [runc](https://github.com/opencontainers/runc) as tool for running containers.
-
-Now, we can start with the configuration.
-
 ## runc
 
-Lets download runc binaries
+First of all, since Kubernetes is an orchestrator for containers, we need to figure out how to run containers.
+The OCI (Open Container Initiative) can help us here.
+
+> The OCI is a project under the Linux Foundation that aims to develop open industry standards for container formats and runtimes. The primary goal of OCI is to ensure container portability and interoperability across different platforms and container runtime implementations. The OCI has two main specifications, the Runtime Specification (runtime-spec) and the Image Specification (image-spec).
+
+As we can see from the description, OCI is a standard that defines what a container image is and how to run it.
+
+But it is only a standard; obviously, some tool has to implement it. And indeed, runc is the reference implementation of the OCI runtime specification.
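To make this less abstract, here is a rough sketch of what an OCI runtime bundle (the thing runc actually consumes) looks like on disk; we will build exactly such a bundle in the steps below:

```bash
# An OCI runtime bundle is simply a directory with two things in it:
# busybox-container/
# ├── config.json   # runtime-spec: the process to run, mounts, namespaces, cgroups
# └── rootfs/       # the unpacked container filesystem, produced from an OCI image
#
# `runc spec` generates a template config.json that can then be edited by hand.
```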
+ +So let's install it and run some container with the usage of runc + +First of all we need to download runc binaries ```bash wget -q --show-progress --https-only --timestamping \ https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 ``` -As download process complete, we need to move runc binaries to bin folder +After download process complete, we need to move runc binaries to proper folder ```bash { @@ -36,7 +34,7 @@ As download process complete, we need to move runc binaries to bin folder } ``` -Now, as we have runc installed, we can run busybox container +Now, as we have runc configured, we can run busybox container ```bash { @@ -51,7 +49,7 @@ sed -i 's/"sh"/"echo","Hello from container runned by runc!"/' config.json } ``` -Now, we created all proper files, required by runc to run the container (including container confguration and files which will be accesible from container). +On this step we downloaded the busybox immage, unarchived it and created proper files, required by runc to run the container (including container confguration and files which will be accesible from container). So, lets run our container ```bash runc run busybox @@ -62,7 +60,7 @@ Output: Hello from container runned by runc! ``` -Great, everything works, now we need to clean up our workspace +Great, we create our first container in this tutorial. Now we will clean up our workspace. ```bash { cd ~ @@ -72,16 +70,21 @@ rm -r busybox-container ## containerd -As already mentioned, Container Runtime: The software responsible for running and managing containers on the worker nodes. The container runtime is responsible for pulling images, creating containers, and managing container lifecycles +As we can see, runc can run containers, but runc interface is something unknown for kubernetes. -Now, let's download containerd. +There is another standert defined which is used by kubelet to communicate with container runtime - CRI +> The CRI is a plugin interface which enables the kubelet to use a wide variety of container runtimes, without having a need to recompile the cluster components. + +In this tutorial we will use [containerd](https://github.com/containerd/containerd) as tool which is compattible with CRI. + +To deploy containerd, first of all we need to download it. ```bash wget -q --show-progress --https-only --timestamping \ https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz ``` -Unzip containerd binaries to the bin directory +After download process complete, we need to unzip and move containerd binaries to proper folder ```bash { @@ -91,9 +94,11 @@ Unzip containerd binaries to the bin directory } ``` -In comparison to the runc, containerd is a service which can be called by someone to run container. +In comparison to the runc, containerd is a service works like a service which can be called by someone to run container. It means that we need to run it, before we can start comminucate with it. -Before we will run containerd service, we need to configure it. +We will configure containerd as a service. + +To do that, we need to create containerd configuration file ```bash { sudo mkdir -p /etc/containerd/ @@ -112,7 +117,7 @@ EOF As we can see, we configured containerd to use runc (we installed before) to run containers. -Now we can configure contanerd service +After configuration file create, we need to create containerd service ```bash cat < An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. 
> The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. -So, lets set up kubelet and run some pod. +![image](./img/02_cluster_architecture_kubelet.png "Kubelet") -First of all we need to download kubelet binary +As we can see, in this section we will work with the next layer of kubernetes components (if I can say so). +Previously we worked with containers, but on this step we will work with other afteraction kubernetes has - pod. +As you remember at the end, kubernetes usually start pods. So now we will try to create it. But it a bit not usual way, instead of using kubernetes api (which we didn't configured yet), we will create pods with the usage of kubelet only. +To do that we will use the [static pods](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/) functionality. + +So, lets begin. + +First of all we need to download kubelet. ```bash wget -q --show-progress --https-only --timestamping \ https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet ``` -Make file exacutable and move to the bin folder - +After download process complete, muve kubelet binaries to the proper folder ```bash { chmod +x kubelet @@ -26,8 +32,7 @@ Make file exacutable and move to the bin folder } ``` -And the last part when configuring kubelet - create service to run kubelet. - +As kubelet is a service which is used to manage pods running on the node, we need to configure that service ```bash cat < Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. -Before we will create static pod manifests, we need to create folders where we will place our pods (as we can see from kibelet configuration, it sould be /etc/kubernetes/manifests) +Before we will create static pod manifests, we need to create folders where we will place our pods (same as we configured in kubelet) ```bash { @@ -99,8 +102,7 @@ mkdir /etc/kubernetes/manifests } ``` -After directory created, we can create static with busybox inside - +After directory created, we can create static pod with busybox container inside ```bash cat < /etc/kubernetes/manifests/static-pod.yml apiVersion: v1 @@ -118,27 +120,68 @@ spec: EOF ``` -We can check if containerd runned new container +Now lets use the ctr tool we already know to list the containers created ```bash -ctr tasks ls +ctr containers ls ``` Output: ```bash -TASK PID STATUS +CONTAINER IMAGE RUNTIME ``` Looks like containerd didn't created any containrs yet? -Of course it may be true, but baed on the output of ctr command we can't answer that question. It is not true (of course it may be true, but based on the output of the ctr command we can't confirm that ////more about that here) +Of course it may be true, but based on the output of ctr command we can't confirm that. -To see containers managed by kubelet lets install [crictl](http://google.com/crictl). -Download binaries +Containerd has namespace feature. Namespace is a mechanism used to provide isolation and separation between different sets of resources. 
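A quick way to see this isolation in action is to pull an image into a throwaway namespace (the name `demo` below is arbitrary, used only for illustration) and check that the other namespaces do not see it:

```bash
# pull an image into a separate namespace (containerd creates the namespace on first use)
ctr --namespace demo images pull docker.io/library/busybox:latest

# the image is visible only inside that namespace
ctr --namespace demo images ls

# the default namespace does not list it
ctr images ls
```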
+ +We can check containerd namespaces by running +```bash +ctr namespace ls +``` + +Output: +``` +NAME LABELS +default +k8s.io +``` + +Containers created by kubelet located in the k8s.io namespace, to see them run +```bash +ctr --namespace k8s.io containers ls +``` + +Output: +``` +CONTAINER IMAGE RUNTIME +33d2725dd9f343de6dd0d4b77161a532ae17d410b266efb31862605453eb54e0 k8s.gcr.io/pause:3.2 io.containerd.runtime.v1.linux +e75eb4ac89f32ccfb6dc6e894cb6b4429b6dc70eba832bc6dea4dc69b03dec6e sha256:af2c3e96bcf1a80da1d9b57ec0adc29f73f773a4a115344b7e06aec982157a33 io.containerd.runtime.v1.linux +``` + +And to get container status we can call +```bash +ctr --namespace k8s.io task ls +``` + +Output: +``` +TASK PID STATUS +e75eb4ac89f32ccfb6dc6e894cb6b4429b6dc70eba832bc6dea4dc69b03dec6e 1524 RUNNING +33d2725dd9f343de6dd0d4b77161a532ae17d410b266efb31862605453eb54e0 1472 RUNNING +``` + +But it is not what we expected, we expected to see container named busybox. Of course there is no majic, all anformation about pod to which this container belongs to, kubernetes containername, etc are located in the metadata on the container, and can be easilly extracted with the usage of other crt command (like this - ctr --namespace k8s.io containers info a597ed1f8dee6a43d398173754fd028c7ac481ee27e09ad4642187ed408814b4). but we want to see it in a bit more readable format, this is why, we will use different tool - [crictl](http://google.com/crictl). + +In Comparison to the ctr (which can work with containerd only), crictl is a tool which interracts with any CRI compliant runtime, containerd is only runtime we use in this tutorial. Also, cri ctl provide information in more "kubernetes" way (i mean it can show pods and containers with names like in kubernetes). + +So, lets download crictl binaries ```bash wget -q --show-progress --https-only --timestamping \ https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz ``` -Install (move to bin folder) +After download process complete, move crictl binaries to the proper folder ```bash { tar -xvf crictl-v1.21.0-linux-amd64.tar.gz @@ -146,7 +189,8 @@ Install (move to bin folder) sudo mv crictl /usr/local/bin/ } ``` -And configure a bit + +And configure it a bit ```bash cat < /etc/crictl.yaml runtime-endpoint: unix:///run/containerd/containerd.sock @@ -156,6 +200,8 @@ debug: false EOF ``` +As already mentioned, crictl can be configured to use any CRI complient runtime, in our case we configured containerd (by providing containerd socket path). + And we can finaly get the list of pods running on our server ```bash crictl pods @@ -196,7 +242,7 @@ Hello from static pod Great, now we can run pods on our server. -Before we will continue, remove our pods running +Now, lets clean up our worspace and continue with the next section ```bash rm /etc/kubernetes/manifests/static-pod.yml ``` diff --git a/docs/03-pod-networking.md b/docs/03-pod-networking.md index 200f273..f49c409 100644 --- a/docs/03-pod-networking.md +++ b/docs/03-pod-networking.md @@ -1,10 +1,10 @@ # Pod networking -In this part of tutorial, we will have closer look at the container networking -And lets start with nginx runned inside container. +Now, we know how kubelet runs containers and we know how to run pod without other kubernetes cluster components. -Create manifest for nginx static pod +Let's experiment with static pod a bit. 
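One thing worth keeping in mind before creating the nginx pods below: every pod that does not use the host network gets its own Linux network namespace on the node. A simple sanity check you can run at any point in this section (assuming util-linux is installed, which is the default on Ubuntu) is:

```bash
# list network namespaces on the node; each non-hostNetwork pod adds one entry
sudo lsns --type net
```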
+We will create static pod, but this time we will run nginx, instead of busybox ```bash cat < /etc/kubernetes/manifests/static-nginx.yml apiVersion: v1 @@ -68,8 +68,6 @@ Commercial support is available at ``` Now, lets try to create 1 more nginx container. - - ```bash cat < /etc/kubernetes/manifests/static-nginx-2.yml apiVersion: v1 @@ -130,7 +128,8 @@ nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use) As we can see, the reason of the exit state - adress already in use. Our address already in use by our other container. -We received this error because we run two pods with configuration +We received this error because we run two pods which require an access to the same port on our server. +This was done by specifying ``` ... spec: @@ -138,9 +137,9 @@ spec: ... ``` -As we can see our pod are runned in host network. -Lets try to fix this by updating our manifests to run containers in not host network. +This option runs our container on our host without any network isolation (almost the same as running two nginx without on the same host without containers) +Now we will try to update our pod manifests to run our containers in separate network "namespaces" ```bash { cat < /etc/kubernetes/manifests/static-nginx.yml @@ -171,8 +170,9 @@ EOF } ``` -And check our pods once again +As you can see we simply removed hostNetwork: true configuration option. +So, lets check what we have ```bash crictl pods ``` @@ -201,9 +201,9 @@ As we can see cni plugin is not initialized. But what is cni plugin. > A CNI plugin is a binary executable that is responsible for configuring the network interfaces and routes of a container or pod. It communicates with the container runtime (such as Docker or CRI-O) to set up networking for the container or pod. -As we can see kubelet can't configure network for pod by himself, same as with containers, to configure network kubelet use some 'protocol' to communicate with 'someone' who can configure networ. +As we can see kubelet can't configure network for pod by himself (or with the help of containerd). Same as with containers, to configure network kubelet use some 'protocol' to communicate with 'someone' who can configure networ. -Now, we will configure the cni plugin 1for our instalation. +Now, we will configure the cni plugin for our kubelet. First of all we need to download that plugin @@ -262,8 +262,11 @@ EOF } ``` -And finaly we need to update our kubelet config (add network-plugin configuration option) +Of course all configuration options here important, but I want to highlight 2 of them: +- ranges - information about subnets from shich ip addresses will be assigned for our pods +- routes - information on how to route trafic between nodes, as we have single node kubernetes cluster the configuration is very easy +Update our kubelet config (add network-plugin configuration option) ```bash cat < ``` -Now, when we fixed everything, lets ckeck if our pods are in running state - +Now, after all fixes applyed and we have working kubelet, we can check wheather our pods created ```bash crictl pods ``` @@ -347,7 +347,7 @@ CONTAINER IMAGE CREATED STATE They are also in running state On this step if we will try to curl localhost nothing will happen. -Our pods are runned in separate network namespaces, and each pod has own ip address. +Our pods are runned in separate network namespaces, and each pod has its own ip address. We need to define it. ```bash @@ -370,9 +370,9 @@ Output: ... 
``` -During the plugin configuration we remember that we configure the subnet pod our pods to be 10.240.1.0/24. -Now, we can curl our container. +During the plugin configuration we remember that we configure the subnet pod our pods to be 10.240.1.0/24. So, the container received its IP from the range specified, in my case it was 10.240.1.1. +So, lets try to curl our container. ```bash { PID=$(crictl pods --label app=static-nginx-2 -q) @@ -409,7 +409,7 @@ Commercial support is available at ``` -As we can see we successfully reached out container. +As we can see we successfully reached out container from our host. But we remember that cni plugin also responsible to configure communication between containers. Lets check @@ -493,7 +493,9 @@ written to stdout As we can see we successfully reached our container from busybox. -Now, we will clean up workplace +In this section we configured CNI plugin for our intallation and now we can run pods which can communicate with each other over the network. + +In nest section we will procede with the kubernetes cluster configuration, but before, we need to clean up workspace. ```bash rm /etc/kubernetes/manifests/static-* ``` diff --git a/docs/04-etcd.md b/docs/04-etcd.md index 7a9db0c..4c537e4 100644 --- a/docs/04-etcd.md +++ b/docs/04-etcd.md @@ -1,27 +1,38 @@ # ETCD -At this point we already know that we can run pods even withour API server. But current aproach os not very confortable to use, to create pod we need to place some manifest in some place. it is not very comfortable to manage. Now we will start our jorney of configuring "real" kubernetes. And of cource all our manifests should be stored somewhere. +At this point we already know that we can run pods even withour API server. But current aproach is not very confortable to use, to create pod we need to place some manifest in some place. It is not very comfortable to manage. Now we will start our jorney of configuring "real" (more real than current, because current doesn't look like kubernetes at all) kubernetes. And of course we need to start with the storage. ![image](./img/04_cluster_architecture_etcd.png "Kubelet") -For kubernetes (at least for original one it I can say so) we need to configura database called ETCD. +For kubernetes (at least for original one if I can say so) we need to configura database called [etcd](https://etcd.io/). -To configure db (and other kubennetes components in future) we will need some tools to configure certificates. +>etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node. +Our etcd will be configured as single node database with authentication (by useage of client cert file). + +So, lets start. + +As I already said, communication with our etcd cluster will be secured, it means that we need to generate some keys, to encrypt all the trafic. 
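Once the certificates are generated in the steps below, it is worth double-checking what was actually issued; for example, inspecting the subject, validity and SANs of the generated kubernetes certificate (file names as created below):

```bash
# print the subject and validity period of the generated certificate
openssl x509 -in kubernetes.pem -noout -subject -dates

# check which host names and IPs the certificate is valid for
openssl x509 -in kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"
```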
+To do so, we need to download tools which may help us to generate certificates +```bash +wget -q --show-progress --https-only --timestamping \ + https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 \ + https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64 +``` + +And install ```bash { - wget -q --show-progress --https-only --timestamping \ - https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \ - https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson - chmod +x cfssl cfssljson - sudo mv cfssl cfssljson /usr/local/bin/ +mv cfssl_1.4.1_linux_amd64 cfssl +mv cfssljson_1.4.1_linux_amd64 cfssljson +chmod +x cfssl cfssljson +sudo mv cfssl cfssljson /usr/local/bin/ } ``` -And now lets begin our etcd configuration journey. - -First of all we will create ca certificate file. +After the tools installed successfully, we need to generate ca certificate. +A ca (Certificate Authority) certificate, also known as a root certificate or a trusted root certificate, is a digital certificate that is used to verify the authenticity of other certificates. ```bash { cat > ca-config.json < to simplify our kubernetes deployment, we will use this certificate for other kubernetes components as well, that is why we will add some extra configs (like KUBERNETES_HOST_NAME to it) ```bash { HOST_NAME=$(hostname -a) @@ -113,14 +125,13 @@ kubernetes-key.pem kubernetes.pem ``` -Now, when we have all required certs, we need to download etcd - +Now, we have all required certificates, so, lets download etcd ```bash wget -q --show-progress --https-only --timestamping \ "https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz" ``` -Decompres and install it to the proper folder +After donload complete, we can move etcd binaries to proper folders ```bash { tar -xvf etcd-v3.4.15-linux-amd64.tar.gz @@ -128,8 +139,7 @@ Decompres and install it to the proper folder } ``` -When etcd is installed, we need to move our generated certificates to the proper folder - +Now, we can start wioth the configurations of the etcd service. First of all, we need to discribute previuosly generated certificates to the proper folder ```bash { sudo mkdir -p /etc/etcd /var/lib/etcd @@ -141,7 +151,6 @@ When etcd is installed, we need to move our generated certificates to the proper ``` Create etcd service configuration file - ```bash cat < The Kubernetes API server validates and configures data for the api objects which include pods, services, replicationcontrollers, and others. The API Server services REST operations and provides the frontend to the cluster's shared state through which all other components interact. + +As you can see from the description adpi server is central (not main) component of kubernetes cluster. ![image](./img/05_cluster_architecture_apiserver.png "Kubelet") -так як ми уже налаштували бд - можна починати налаштовувати і сам куб апі сервер, будемо пробувати щось четапити +## certificates + +Before we begin with configuration of API server, we need to create certificates for kubernetes that will be used to sign service account tokens. ```bash { @@ -34,42 +41,32 @@ cfssl gencert \ } ``` +Now, we need to distbibute certificates to the api server configuration folder ```bash { -cat > admin-csr.json < encryption-config.yaml < /var/lib/kubernetes/encryption-config.yaml < admin-csr.json < +... 
+``` +After our kubelet is in running state, we can check if it is registered in API server ```bash kubectl get nodes ``` +Output: ``` -NAME STATUS ROLES AGE VERSION -example-server NotReady 6s v1.21.0 +NAME STATUS ROLES AGE VERSION +example-server Ready 1m2s v1.21.0 ``` -ох ти, раз є ноди, маж бути і контейнер +After our kubelet registered in API server we can check wheather our pod is in running state ```bash kubectl get pod ``` +Output: ``` NAME READY STATUS RESTARTS AGE hello-world 1/1 Running 0 8m1s ``` -пише що запущений, а що в дійсності? - +As we can see out pod is in running state. In addition to this check we can also check if our pod is really in running state by using crictl +Pods ```bash crictl pods ``` +Output: ``` POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME 1719d0202a5ef 8 minutes ago Ready hello-world default 0 (default) ``` - -```bash -crictl ps -``` - -``` -CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID -3f2b0a0d70377 7cfbbec8963d8 8 minutes ago Running hello-world-container 0 1719d0202a5ef -``` - -навіть логи можна глянути +Also we can see logs from our pod ```bash crictl logs $(crictl ps -q) ``` +Output: ``` Hello, World! Hello, World! @@ -184,24 +235,21 @@ Hello, World! ... ``` -але так не діло логи дивитись -тепер коли ми зрозуміли що сервер наш працює і показує правду - можна користуватись тільки куб сітіель +But now, lets view logs using kubectl instead of crictl. In our case it is maybe not very important, but in case of cluster with more than 1 node it is important, as crictl allows to read info only about pods on node, when kubectl (by communicating with api server) allows to read info from all nodes. ```bash kubectl logs hello-world ``` +Output: ``` Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log hello-world) ``` -причина помилки відсутність всяких приколів -давайте їх створимо - - +As we can see api server has no permissions to read logs from the node. This message apears, because during authorization, kubelet ask api server if the user with the name kubernetes has proper permission, but now it is not true. So let's fix this ```bash { -cat < rbac.yml +cat < node-auth.yml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: @@ -225,14 +273,16 @@ subjects: name: kubernetes EOF -kubectl apply -f rbac.yml +kubectl apply -f node-auth.yml } ``` +After our cluster role and role binding creted we can retry ```bash kubectl logs hello-world ``` +Output: ``` Hello, World! Hello, World! @@ -241,6 +291,22 @@ Hello, World! ... ``` -ух ти тепер точно все працює, можна користуватись кубернетесом, але стоп у нас все ще є речі які на нашому графіку сірі давайте розбиратись +As you can see, we can create pods and kubelet will run that pods. +Now, we need to clean-up out workspace. +```bash +kubectl delete -f pod.yaml +``` + +Check if pod deleted +```bash +kubectl get pod +``` + +Outpput: +``` +No resources found in default namespace. +``` + +Next: [Scheduler](./07-controller-manager.md) Next: [Controller manager](./07-controller-manager.md) \ No newline at end of file diff --git a/docs/07-scheduler.md b/docs/07-scheduler.md new file mode 100644 index 0000000..c8a1a91 --- /dev/null +++ b/docs/07-scheduler.md @@ -0,0 +1,264 @@ +# Scheduler + +In this section we will configure scheduler. 
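Besides the scheduler logs we will look at later in this section, a simple way to observe the scheduler at work once it is configured is to look at scheduling events (the pod name hello-world matches the pod we create below):

```bash
# events emitted by the scheduler when it assigns pods to nodes
kubectl get events --field-selector reason=Scheduled

# the Events section of the pod description shows the same information for a single pod
kubectl describe pod hello-world
```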
+ +![image](./img/07_cluster_architecture_scheduler.png "Kubelet") + +> In Kubernetes, a scheduler is a core component responsible for assigning and placing workloads (such as pods) onto available nodes in a cluster. It ensures that the cluster's resources are utilized efficiently and that workloads are scheduled based on their resource requirements and other constraints. +> Kublet, regularly request the list of pods assigned to it. In case if new pod appear, kubelet will run new pod. In case if pod marked as deleted, kubelet will start termination process. + +In previous section, we created pod and it was runed on the node, but why? +The reason of that, we specified the node name on which to run the pod by our self +```bash + nodeName: ${HOST_NAME} +``` + +So, lets create pod without node specified +```bash +{ +cat < pod.yaml +apiVersion: v1 +kind: Pod +metadata: + name: hello-world +spec: + serviceAccountName: hello-world + containers: + - name: hello-world-container + image: busybox + command: ['sh', '-c', 'while true; do echo "Hello, World!"; sleep 1; done'] +EOF + +kubectl apply -f pod.yaml +} +``` + +And check pod status +```bash +kubectl get pod -o wide +``` + +Output: +``` +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +hello-world 0/1 Pending 0 19s +``` + +As we can see node field of our pod is none and pod is in pending state. +So, lets, configure scheduler and check if it will solve the issue. + +## certificates + +We will start with certificates. + +As you remeber we configured our API server cto use client certificate to authenticate user. +So, lets create proper certificate for the scheduler +```bash +{ +cat > kube-scheduler-csr.json < +``` + +As you can see, our pod still in pending mode. + +To define the reason of this, we will review the logs of our scheduler. +```bash +... +May 21 20:52:25 example-server kube-scheduler[91664]: I0521 20:52:25.471604 91664 factory.go:338] "Unable to schedule pod; no fit; waiting" pod="default/hello-world" err="0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate." +... +``` + +As we can see our pod wasn't assigned to the node because node has some taint, lets check our node taints. +```bash +kubectl get nodes $(hostname -a) -o jsonpath='{.spec.taints}' +``` + +Output: +``` +[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"}] +``` + +As you can see, our node has taint with efect no schedule. The reason of this???? +But lets fix this. +```bash +kubectl taint nodes $(hostname -a) node.kubernetes.io/not-ready:NoSchedule- +``` + +And check our pods list again +```bash +kubectl get pod -o wide +``` + +Output: +``` +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +hello-world 1/1 Running 0 29m 10.240.1.3 example-server +``` + +As you can see out pod is in running state, means that scheduler works as expected. + +Now we need to clean-up our wirkspace +```bash +kubectl delete -f pod.yaml +``` + +Next: [Controller manager](./08-controller-manager.md ) \ No newline at end of file diff --git a/docs/07-controller-manager.md b/docs/08-controller-manager.md similarity index 58% rename from docs/07-controller-manager.md rename to docs/08-controller-manager.md index 2299406..a207fec 100644 --- a/docs/07-controller-manager.md +++ b/docs/08-controller-manager.md @@ -1,14 +1,14 @@ # Controller manager -![image](./img/07_cluster_architecture_controller_manager.png "Kubelet") +In this part we will configure controller-manager. 
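Once the controller manager is running (at the end of this section), you can observe the whole chain of objects it reconciles for a deployment: the deployment controller creates a replicaset, and the replicaset controller creates the pods. A quick way to see all three levels at once (object names will depend on the deployment created below):

```bash
# deployment -> replicaset -> pods, all kept in sync by the controller manager
kubectl get deployments,replicasets,pods -o wide
```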
-для того щоб відчути весь смак - давайте почнемо із того що вернемось трохи не до конфігураційних всяких штук а до абстракцій кубернетесу +![image](./img/08_cluster_architecture_controller_manager.png "Kubelet") -і власне наша наступна абстракція - деплоймент - -тож давайте її створимо +>Controller Manager is a core component responsible for managing various controllers that regulate the desired state of the cluster. It runs as a separate process on the Kubernetes control plane and includes several built-in controllers +To see controller manager in action, we will create deployment before controller manager configured. ```bash +{ cat < deployment.yaml apiVersion: apps/v1 kind: Deployment @@ -32,24 +32,31 @@ spec: EOF kubectl apply -f deployment.yaml +} ``` +Check created deployment status: ```bash kubectl get deploy ``` +Output: ``` NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 0/1 0 0 24s ``` -такс, щось пішло не так -чомусь наші поди не створюються - а мають +As we can se our deployment isn't in ready state. -за те щоб поди створювались відповідає контроллєр менеджер, а у нас його немає -тож давайте цю проблему вирішувати +As we already mentioned, in kubernetes controller manager is responsible to ensure that desired state of the cluster equals to the actual state. In our case it means that deployment controller should create replicaset and replicaset controller should create pod which will be assigned to nodes by scheduler. But as controller manager is not configured - nothing happen with created deployment. +So, lets configure controller manager. +## certificates +We will start with certificates. + +As you remeber we configured our API server cto use client certificate to authenticate user. +So, lets create proper certificate for the controller manager ```bash { cat > kube-controller-manager-csr.json < -nginx-deployment-5d9cbcf759-x4pk8 0/1 Pending 0 5m22s +NAME READY STATUS RESTARTS AGE +deployment-74fc7cdd68-89rqw 1/1 Running 0 67s ``` -бачимо що йому ніхто ще не проставив ноду, а без ноди кублєт сам не запустить под +Now, when our controller manager configured, lets clean up our workspace. +```bash +kubectl delete -f deployment.yaml +``` -Next: [Scheduler](./08-scheduler.md) \ No newline at end of file +Next: [Kube-proxy](./09-kubeproxy.md) \ No newline at end of file diff --git a/docs/08-scheduler.md b/docs/08-scheduler.md deleted file mode 100644 index 4ca2d8f..0000000 --- a/docs/08-scheduler.md +++ /dev/null @@ -1,151 +0,0 @@ -# Scheduler - -![image](./img/08_cluster_architecture_scheduler.png "Kubelet") - -```bash -{ - -cat > kube-scheduler-csr.json < -nginx-deployment-5d9cbcf759-x4pk8 1/1 Running 0 9m34s 10.240.1.10 example-server -``` - -```bash -kubectl logs nginx-deployment-5d9cbcf759-x4pk8 -``` - -``` -Hello, World from deployment! -Hello, World from deployment! -Hello, World from deployment! -... 
-``` - -Next: [Kube proxy](./09-kubeproxy.md) \ No newline at end of file diff --git a/docs/09-kubeproxy.md b/docs/09-kubeproxy.md index ed55d8d..e3c29e2 100644 --- a/docs/09-kubeproxy.md +++ b/docs/09-kubeproxy.md @@ -1,11 +1,26 @@ -# Kubeproxy +# Kube-proxy ![image](./img/09_cluster_architecture_proxy.png "Kubelet") такс, ```bash +{ cat < nginx-deployment.yml +apiVersion: v1 +kind: ConfigMap +metadata: + name: nginx-conf +data: + default.conf: | + server { + listen 80; + server_name _; + location / { + return 200 "Hello from pod: \$hostname\n"; + } + } +--- apiVersion: apps/v1 kind: Deployment metadata: @@ -25,25 +40,66 @@ spec: image: nginx:1.21.3 ports: - containerPort: 80 + volumeMounts: + - name: nginx-conf + mountPath: /etc/nginx/conf.d + volumes: + - name: nginx-conf + configMap: + name: nginx-conf EOF kubectl apply -f nginx-deployment.yml +} ``` ```bash kubectl get pod -o wide ``` +Output: ``` -NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES -hello-world 1/1 Running 0 109m 10.240.1.9 example-server -nginx-deployment-5d9cbcf759-x4pk8 1/1 Running 0 84m 10.240.1.14 example-server +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +nginx-deployment-db9778f94-2zv7x 1/1 Running 0 63s 10.240.1.12 example-server +nginx-deployment-db9778f94-q5jx4 1/1 Running 0 63s 10.240.1.10 example-server +nginx-deployment-db9778f94-twx78 1/1 Running 0 63s 10.240.1.11 example-server ``` -нам потрібна айпі адреса поду з деплойменту, в моєму випадку 10.240.1.10 -запам'ятаємо її +now, we will run busybox container and will try to access our pods from other container ```bash +{ +cat < pod.yaml +apiVersion: v1 +kind: Pod +metadata: + name: busy-box +spec: + containers: + - name: busy-box + image: busybox + command: ['sh', '-c', 'while true; do echo "Busy"; sleep 1; done'] +EOF + +kubectl apply -f pod.yaml +} +``` + +and execute command from our container + +```bash +kubectl exec busy-box -- wget -O - $(kubectl get pod -o wide | grep nginx | awk '{print $6}' | head -n 1) +``` + +Output: +``` +error: unable to upgrade connection: Forbidden (user=kubernetes, verb=create, resource=nodes, subresource=proxy) +``` + +error occured because api server has no access to execute commands + +```bash +{ cat < rbac-create.yml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 @@ -69,44 +125,27 @@ roleRef: EOF kubectl apply -f rbac-create.yml +} ``` -``` -kubectl exec hello-world -- wget -O - 10.240.1.14 +and execute command from our container + +```bash +kubectl exec busy-box -- wget -O - $(kubectl get pod -o wide | grep nginx | awk '{print $6}' | head -n 1) ``` +Output: ``` - - - -Welcome to nginx! - - - -
-Welcome to nginx!
-
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to
-nginx.org.
-Commercial support is available at
-nginx.com.
-
-Thank you for using nginx.
- - -Connecting to 10.240.1.14 (10.240.1.14:80) +Hello from pod: nginx-deployment-68b9c94586-qkwjc +Connecting to 10.32.0.230 (10.32.0.230:80) writing to stdout -- 100% |********************************| 615 0:00:00 ETA +- 100% |********************************| 50 0:00:00 ETA written to stdout ``` -але це не прикольно, хочу звертатись до нджінк деплойменту і щоб воно там само працювало -знаю що є сервіси - давай через них +it is not very interesting to access pods by ip, we want to have some automatic load balancing +we know that services may help us with that + ```bash { @@ -128,20 +167,30 @@ kubectl apply -f nginx-service.yml } ``` +get our server + ```bash kubectl get service ``` -такс тепер беремо айпішнік того сервісу (у моєму випадку 10.32.0.95) -і спробуємо повторити те саме +and try to ping our containers by service ip ```bash -kubectl exec hello-world -- wget -O - 10.32.0.95 +kubectl exec busy-box -- wget -O - $(kubectl get service -o wide | grep nginx | awk '{print $3}') ``` -і нічого (тут можна згадати ще про ендпоінти і тп, але то може бути просто на довго) -головна причина чого не працює на даному етапі - у нас не запущений ще 1 важливий компонент -а саме куб проксі +Output: +``` +Connecting to 10.32.0.230 (10.32.0.230:80) +``` + +hm, nothing happen, the reason - our cluster do not know how to connect to service ip + +this is responsibiltiy of kube-proxy + +it means that we need to configure kube-proxy + +as usually we will start with certs ```bash { @@ -174,6 +223,7 @@ cfssl gencert \ } ``` +now connection config ```bash { @@ -198,16 +248,22 @@ cfssl gencert \ } ``` +now, download kube-proxy + ```bash wget -q --show-progress --https-only --timestamping \ https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy ``` +create proper folders + ```bash sudo mkdir -p \ /var/lib/kube-proxy ``` +install binaries + ```bash { chmod +x kube-proxy @@ -215,10 +271,14 @@ sudo mkdir -p \ } ``` +move connection config to proper folder + ```bash sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig ``` +create kube-proxy config file + ```bash cat < - - -Welcome to nginx! - - - -
-Welcome to nginx!
-
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to
-nginx.org.
-Commercial support is available at
-nginx.com.
-
-Thank you for using nginx.
- - -Connecting to 10.32.0.95 (10.32.0.95:80) +Hello from pod: nginx-deployment-68b9c94586-qkwjc +Connecting to 10.32.0.230 (10.32.0.230:80) writing to stdout -- 100% |********************************| 615 0:00:00 ETA +- 100% |********************************| 50 0:00:00 ETA written to stdout ``` -ух ти у нас все вийшло + +if you try to repeat the command once again you will see that requests are handled by different pods + +great we successfully configured kubeproxy and can balance trafic between containers Next: [DNS in Kubernetes](./10-dns.md) \ No newline at end of file diff --git a/docs/10-dns.md b/docs/10-dns.md index 9a23f05..348a6fd 100644 --- a/docs/10-dns.md +++ b/docs/10-dns.md @@ -1,52 +1,36 @@ -# dns +# DNS in Kubernetes -такс, це звісно приколно що можна по айпішніку, але я читав що можна по назві сервісу звертатись +Again, it is very interesting to access the service by ip but we know that we can access it by service name +Lets try, ```bash -kubectl exec hello-world -- wget -O - nginx-service +kubectl exec busy-box -- wget -O - nginx-service ``` -не особо працює, щось пішло не так +and nothing happen -а так тому, що ми не поставили деенес адон -але нічого, зараз ми то виправимо +the reason is DNS server which we still not configured + +but dns server we can install from kubernetes directly ```bash kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml ``` -ну у мене не особо запрацювало -потрібно зробити зміни у кублєті +and try to erpeat + ```bash -cat < ca-config.json < ca-csr.json < kubernetes-csr.json < service-account-csr.json < admin-csr.json < encryption-config.yaml < pod.yaml -apiVersion: v1 -kind: Pod -metadata: - name: hello-world -spec: - serviceAccountName: hello-world - containers: - - name: hello-world-container - image: busybox - command: ['sh', '-c', 'while true; do echo "Hello, World!"; sleep 1; done'] - nodeName: worker -EOF - -cat < sa.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: hello-world -automountServiceAccountToken: false -EOF - -kubectl apply -f sa.yaml --kubeconfig=admin.kubeconfig -kubectl apply -f pod.yaml --kubeconfig=admin.kubeconfig -} -``` - -kubelet - -????, ага ще напевно потрібно виписувати сертифікати на публічний айпішнік -```bash -sudo echo "127.0.0.1 worker" >> /etc/hosts -``` - -```bash -{ -cat > kubelet-csr.json < nginx-pod.yaml -apiVersion: v1 -kind: Pod -metadata: - name: nginx-pod -spec: - serviceAccountName: hello-world - containers: - - name: nginx-container - image: nginx - ports: - - containerPort: 80 - nodeName: worker -EOF - - -kubectl apply -f nginx-pod.yaml --kubeconfig=admin.kubeconfig -``` - -```bash -kubectl get pod nginx-pod --kubeconfig=admin.kubeconfig -o=jsonpath='{.status.podIP}' -``` - -```bash -curl $(kubectl get pod nginx-pod --kubeconfig=admin.kubeconfig -o=jsonpath='{.status.podIP}') -``` - -```bash -kubectl delete -f nginx-pod.yaml --kubeconfig=admin.kubeconfig -kubectl delete -f pod.yaml --kubeconfig=admin.kubeconfig -kubectl delete -f sa.yaml --kubeconfig=admin.kubeconfig -``` - -```bash -cat < nginx-deployment.yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: nginx-deployment -spec: - replicas: 3 - selector: - matchLabels: - app: nginx - template: - metadata: - labels: - app: nginx - spec: - containers: - - name: nginx-container - image: nginx - ports: - - containerPort: 80 -EOF - -kubectl apply -f nginx-deployment.yaml --kubeconfig=admin.kubeconfig -``` - -```bash -kubectl get pod --kubeconfig=admin.kubeconfig -``` - -```bash -kubectl get 
deployment --kubeconfig=admin.kubeconfig -``` -такс деплоймент є а подів немає - неподобство - -# controller manager - -```bash -{ -cat > kube-controller-manager-csr.json < kube-scheduler-csr.json <