Update manuals
parent
4d5041e29e
commit
864e1cd836
|
@ -10,7 +10,7 @@ To configure the cluster mentioned, we will use Ubuntu server 20.04 (author uses
|
||||||
|
|
||||||
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a> (whatever it means).
|
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a> (whatever it means).
|
||||||
|
|
||||||
# Labs
|
## Labs
|
||||||
|
|
||||||
* [Cluster architecture](./docs/00-kubernetes-architecture.md)
|
* [Cluster architecture](./docs/00-kubernetes-architecture.md)
|
||||||
* [Container runtime](./docs/01-container-runtime.md)
|
* [Container runtime](./docs/01-container-runtime.md)
|
||||||
|
@ -19,7 +19,7 @@ To configure the cluster mentioned, we will use Ubuntu server 20.04 (author uses
|
||||||
* [ETCD](./docs/04-etcd.md)
|
* [ETCD](./docs/04-etcd.md)
|
||||||
* [Api Server](./docs/05-apiserver.md)
|
* [Api Server](./docs/05-apiserver.md)
|
||||||
* [Apiserver - Kubelet integration](./docs/06-apiserver-kubelet.md)
|
* [Apiserver - Kubelet integration](./docs/06-apiserver-kubelet.md)
|
||||||
* [Controller manager](./docs/07-controller-manager.md)
|
* [Scheduler](./docs/07-scheduler.md)
|
||||||
* [Scheduler](./docs/08-scheduler.md)
|
* [Controller manager](./docs/08-controller-manager.md)
|
||||||
* [Kube proxy](./docs/09-kubeproxy.md)
|
* [Kube-proxy](./docs/09-kubeproxy.md)
|
||||||
* [DNS in Kubernetes](./docs/10-dns.md)
|
* [DNS in Kubernetes](./docs/10-dns.md)
|
||||||
|
|
|
@ -1,32 +1,30 @@
|
||||||
# Container runtime
|
# Container runtime
|
||||||
|
|
||||||
In this part of our tutorial we will focus of the container runtime.
|
In this section, we will focus on the container runtime, the part of Kubernetes responsible for running containers.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
Firt of all, container runtime is a tool which can be used by other kubernetes components (kubelet) to manage containers. In case if we have two parts of the system which communicate - we need to have some specification. Nad tehre is the cpecification - CRI.
|
|
||||||
|
|
||||||
> The CRI is a plugin interface which enables the kubelet to use a wide variety of container runtimes, without having a need to recompile the cluster components.
|
|
||||||
|
|
||||||
In this tutorial we will use [containerd](https://github.com/containerd/containerd) as tool for managing the containers on the node.
|
|
||||||
|
|
||||||
On other hand there is a project under the Linux Foundation - OCI.
|
|
||||||
> The OCI is a project under the Linux Foundation is aims to develop open industry standards for container formats and runtimes. The primary goal of OCI is to ensure container portability and interoperability across different platforms and container runtime implementations. The OCI has two main specifications, Runtime Specification (runtime-spec) and Image Specification (image-spec).
|
|
||||||
|
|
||||||
In this tutorial we will use [runc](https://github.com/opencontainers/runc) as tool for running containers.
|
|
||||||
|
|
||||||
Now, we can start with the configuration.
|
|
||||||
|
|
||||||
## runc
|
## runc
|
||||||
|
|
||||||
Lets download runc binaries
|
First of all, since Kubernetes is an orchestrator for containers, we would like to figure out how to run containers.
|
||||||
|
A specification like the OCI can help us here.
|
||||||
|
|
||||||
|
> The OCI is a project under the Linux Foundation that aims to develop open industry standards for container formats and runtimes. The primary goal of OCI is to ensure container portability and interoperability across different platforms and container runtime implementations. The OCI has two main specifications, the Runtime Specification (runtime-spec) and the Image Specification (image-spec).
|
||||||
|
|
||||||
|
As we can see from the description, OCI is a standard that tells us what a container image is and how to run it.
|
||||||
|
|
||||||
|
But it is only a standard; obviously there has to be some tool that implements it. And indeed, runc is the reference implementation of the OCI runtime specification.
|
||||||
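To get a feel for what the runtime specification describes, runc can generate a template OCI config for us (a sketch; run it only after runc is installed in the steps below):

```bash
# generate a template OCI runtime config (config.json) in an empty directory;
# it describes the process to run, mounts, namespaces and so on
mkdir runc-spec-demo && cd runc-spec-demo
runc spec
head config.json
```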
|
|
||||||
|
So let's install it and run a container with the usage of runc.
|
||||||
|
|
||||||
|
First of all, we need to download the runc binaries
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64
|
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64
|
||||||
```
|
```
|
||||||
|
|
||||||
As download process complete, we need to move runc binaries to bin folder
|
After the download completes, we need to move the runc binaries to the proper folder
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -36,7 +34,7 @@ As download process complete, we need to move runc binaries to bin folder
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
Now, as we have runc installed, we can run busybox container
|
Now, as we have runc configured, we can run a busybox container
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -51,7 +49,7 @@ sed -i 's/"sh"/"echo","Hello from container runned by runc!"/' config.json
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
Now, we created all proper files, required by runc to run the container (including container confguration and files which will be accesible from container).
|
In this step, we downloaded the busybox image, unarchived it, and created the files required by runc to run the container (including the container configuration and the files which will be accessible from the container). So, let's run our container
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
runc run busybox
|
runc run busybox
|
||||||
|
@ -62,7 +60,7 @@ Output:
|
||||||
Hello from container runned by runc!
|
Hello from container runned by runc!
|
||||||
```
|
```
|
||||||
|
|
||||||
Great, everything works, now we need to clean up our workspace
|
Great, we created our first container in this tutorial. Now we will clean up our workspace.
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
cd ~
|
cd ~
|
||||||
|
@ -72,16 +70,21 @@ rm -r busybox-container
|
||||||
|
|
||||||
## containerd
|
## containerd
|
||||||
|
|
||||||
As already mentioned, Container Runtime: The software responsible for running and managing containers on the worker nodes. The container runtime is responsible for pulling images, creating containers, and managing container lifecycles
|
As we can see, runc can run containers, but the runc interface is not something Kubernetes can work with directly.
|
||||||
|
|
||||||
Now, let's download containerd.
|
There is another standard, which is used by the kubelet to communicate with the container runtime - the CRI
|
||||||
|
> The CRI is a plugin interface which enables the kubelet to use a wide variety of container runtimes, without having a need to recompile the cluster components.
|
||||||
|
|
||||||
|
In this tutorial, we will use [containerd](https://github.com/containerd/containerd) as a CRI-compatible tool.
|
||||||
|
|
||||||
|
To deploy containerd, first of all we need to download it.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz
|
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz
|
||||||
```
|
```
|
||||||
|
|
||||||
Unzip containerd binaries to the bin directory
|
After the download completes, we need to unzip the containerd binaries and move them to the proper folder
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -91,9 +94,11 @@ Unzip containerd binaries to the bin directory
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
In comparison to the runc, containerd is a service which can be called by someone to run container.
|
In comparison to runc, containerd works like a service which can be called by clients to run containers. It means that we need to run it before we can start communicating with it.
|
||||||
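As a small illustration (a sketch, assuming the binaries are already on your PATH): the ctr client can be invoked at any time, but the part of its output that comes from the daemon will fail with a connection error until the service is started.

```bash
# prints the client version; the server version query fails while
# the containerd daemon is not running yet
ctr version
```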
|
|
||||||
Before we will run containerd service, we need to configure it.
|
We will configure containerd as a service.
|
||||||
|
|
||||||
|
To do that, we need to create the containerd configuration file
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo mkdir -p /etc/containerd/
|
sudo mkdir -p /etc/containerd/
|
||||||
|
@ -112,7 +117,7 @@ EOF
|
||||||
|
|
||||||
As we can see, we configured containerd to use runc (which we installed earlier) to run containers.
|
As we can see, we configured containerd to use runc (which we installed earlier) to run containers.
|
||||||
|
|
||||||
Now we can configure contanerd service
|
After the configuration file is created, we need to create the containerd service
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
|
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
|
||||||
[Unit]
|
[Unit]
|
||||||
|
@ -137,7 +142,7 @@ WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
Run service
|
And now, run it
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
|
@ -146,12 +151,12 @@ Run service
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
Ensure if service is in running state
|
To ensure that our service started successfully, run
|
||||||
```bash
|
```bash
|
||||||
sudo systemctl status containerd
|
sudo systemctl status containerd
|
||||||
```
|
```
|
||||||
|
|
||||||
We should see the output like this
|
The output should be similar to
|
||||||
```
|
```
|
||||||
● containerd.service - containerd container runtime
|
● containerd.service - containerd container runtime
|
||||||
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
|
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
|
||||||
|
@ -166,16 +171,16 @@ We should see the output like this
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
As we have running containerd service, we can run some containers.
|
Now we have the containerd service running. It means that we can try to create a container.
|
||||||
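As an optional check before that (a sketch): since we will later rely on containerd's CRI support, we can verify that the CRI plugin was loaded by the daemon.

```bash
# list the loaded containerd plugins and look for the cri one
sudo ctr plugins ls | grep cri
```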
|
|
||||||
To do that, we need the tool called [ctr](//todo), it is distributed as part of containerd.
|
To do that, we need the tool called [ctr](https://github.com/projectatomic/containerd/blob/master/docs/cli.md), which is distributed as part of containerd (meaning we already installed it together with containerd).
|
||||||
|
|
||||||
Lets pull busbox image
|
First of all, we will pull the busybox image
|
||||||
```bash
|
```bash
|
||||||
sudo ctr images pull docker.io/library/busybox:latest
|
sudo ctr images pull docker.io/library/busybox:latest
|
||||||
```
|
```
|
||||||
|
|
||||||
And check if it is presented on our server
|
After the pull completes, check our image
|
||||||
```bash
|
```bash
|
||||||
ctr images ls
|
ctr images ls
|
||||||
```
|
```
|
||||||
|
@ -188,17 +193,53 @@ docker.io/library/busybox:latest application/vnd.docker.distribution.manifest.li
|
||||||
|
|
||||||
Now, let's start our container.
|
Now, let's start our container.
|
||||||
```bash
|
```bash
|
||||||
ctr run --rm --detach docker.io/library/busybox:latest busybox-container sh -c 'echo "Hello from container runned by containerd!"'
|
ctr run -t --rm --detach docker.io/library/busybox:latest busybox-container sh -c 'echo "Hello from container runned by containerd!"'
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Our container has started successfully.
|
||||||
|
|
||||||
Output:
|
Output:
|
||||||
```bash
|
```bash
|
||||||
Hello from container runned by containerd!
|
Hello from container runned by containerd!
|
||||||
```
|
```
|
||||||
|
|
||||||
Now, lets clean-up.
|
As we can see, we successfully started the container; now we can check its status
|
||||||
```bash
|
```bash
|
||||||
ctr task rm busybox-container
|
ctr containers ls
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
CONTAINER IMAGE RUNTIME
|
||||||
|
busybox-container docker.io/library/busybox:latest io.containerd.runc.v2
|
||||||
|
```
|
||||||
|
|
||||||
|
But there is no info about the status; we can see it by reviewing tasks (I hope to write more about them later).
|
||||||
|
```bash
|
||||||
|
ctr task ls
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
TASK PID STATUS
|
||||||
|
busybox-container 2862580 STOPPED
|
||||||
|
```
|
||||||
|
|
||||||
|
As we can see, our container is in the stopped state (because the command executed successfully and the container stopped).
|
||||||
|
|
||||||
|
Now, let's clean up our workspace and go to the next section.
|
||||||
|
```bash
|
||||||
|
ctr containers rm busybox-container
|
||||||
|
```
|
||||||
|
|
||||||
|
We can check that the lists of containers and tasks are now empty
|
||||||
|
```bash
|
||||||
|
{
|
||||||
|
ctr task ls
|
||||||
|
ctr containers ls
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
We should receive an empty output
|
||||||
|
|
||||||
Next: [Kubelet](./02-kubelet.md)
|
Next: [Kubelet](./02-kubelet.md)
|
||||||
|
|
|
@ -1,24 +1,30 @@
|
||||||
# Kubelet
|
# Kubelet
|
||||||
|
|
||||||

|
In this part of the tutorial, we will configure (it is better to say partially configure) the kubelet on our server.
|
||||||
|
|
||||||
In this part of tutorial we will configure (let's say partially) configure kubelet on our server.
|
But before we configure the kubelet, let's talk a bit about it.
|
||||||
|
|
||||||
As mentioned in the official kubernetes documentation:
|
As mentioned in the official kubernetes documentation:
|
||||||
> An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
|
> An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
|
||||||
> The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
|
> The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
|
||||||
|
|
||||||
So, lets set up kubelet and run some pod.
|

|
||||||
|
|
||||||
First of all we need to download kubelet binary
|
As we can see, in this section we will work with the next layer of Kubernetes components (if I can say so).
|
||||||
|
Previously we worked with containers, but in this step we will work with another abstraction Kubernetes has - the pod.
|
||||||
|
|
||||||
|
As you remember, in the end Kubernetes usually runs pods. So now we will try to create one, but in a slightly unusual way: instead of using the Kubernetes API (which we haven't configured yet), we will create pods with the usage of the kubelet only.
|
||||||
|
To do that we will use the [static pods](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/) functionality.
|
||||||
|
|
||||||
|
So, let's begin.
|
||||||
|
|
||||||
|
First of all, we need to download the kubelet.
|
||||||
```bash
|
```bash
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
|
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
|
||||||
```
|
```
|
||||||
|
|
||||||
Make file exacutable and move to the bin folder
|
After the download completes, move the kubelet binaries to the proper folder
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
chmod +x kubelet
|
chmod +x kubelet
|
||||||
|
@ -26,8 +32,7 @@ Make file exacutable and move to the bin folder
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
And the last part when configuring kubelet - create service to run kubelet.
|
As the kubelet is a service which is used to manage the pods running on the node, we need to configure that service
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
||||||
[Unit]
|
[Unit]
|
||||||
|
@ -58,7 +63,7 @@ The main configuration parameters here:
|
||||||
- --file-check-frequency - how often kubelet will check for the updates of static pods
|
- --file-check-frequency - how often kubelet will check for the updates of static pods
|
||||||
- --pod-manifest-path - directory where we will place our pod manifest files
|
- --pod-manifest-path - directory where we will place our pod manifest files
|
||||||
|
|
||||||
Now, let's start our service
|
After our service is configured, we can start it
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
|
@ -67,12 +72,12 @@ Now, let's start our service
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
And check service status
|
To ensure that our service started successfully, run
|
||||||
```bash
|
```bash
|
||||||
sudo systemctl status kubelet
|
sudo systemctl status kubelet
|
||||||
```
|
```
|
||||||
|
|
||||||
Output:
|
The output should be similar to
|
||||||
```
|
```
|
||||||
● kubelet.service - kubelet: The Kubernetes Node Agent
|
● kubelet.service - kubelet: The Kubernetes Node Agent
|
||||||
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
|
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
|
||||||
|
@ -87,10 +92,8 @@ Output:
|
||||||
```
|
```
|
||||||
|
|
||||||
After the kubelet service is up and running, we can start creating our pods.
|
After the kubelet service is up and running, we can start creating our pods.
|
||||||
To do that we will use static pods feature of kubelet.
|
|
||||||
> Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them.
|
|
||||||
|
|
||||||
Before we will create static pod manifests, we need to create folders where we will place our pods (as we can see from kibelet configuration, it sould be /etc/kubernetes/manifests)
|
Before we create static pod manifests, we need to create the folder where we will place our pods (the same one we configured in the kubelet service)
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -99,8 +102,7 @@ mkdir /etc/kubernetes/manifests
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
After directory created, we can create static with busybox inside
|
After the directory is created, we can create a static pod with a busybox container inside
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF> /etc/kubernetes/manifests/static-pod.yml
|
cat <<EOF> /etc/kubernetes/manifests/static-pod.yml
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
|
@ -118,27 +120,68 @@ spec:
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
We can check if containerd runned new container
|
Now let's use the ctr tool we already know to list the created containers
|
||||||
```bash
|
```bash
|
||||||
ctr tasks ls
|
ctr containers ls
|
||||||
```
|
```
|
||||||
|
|
||||||
Output:
|
Output:
|
||||||
```bash
|
```bash
|
||||||
TASK PID STATUS
|
CONTAINER IMAGE RUNTIME
|
||||||
```
|
```
|
||||||
|
|
||||||
Looks like containerd didn't create any containers yet?
|
Looks like containerd didn't create any containers yet?
|
||||||
Of course it may be true, but baed on the output of ctr command we can't answer that question. It is not true (of course it may be true, but based on the output of the ctr command we can't confirm that ////more about that here)
|
It may be true, but based on the output of the ctr command we can't confirm that.
|
||||||
|
|
||||||
To see containers managed by kubelet lets install [crictl](http://google.com/crictl).
|
Containerd has a namespace feature. A namespace is a mechanism used to provide isolation and separation between different sets of resources.
|
||||||
Download binaries
|
|
||||||
|
We can check containerd namespaces by running
|
||||||
|
```bash
|
||||||
|
ctr namespace ls
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
NAME LABELS
|
||||||
|
default
|
||||||
|
k8s.io
|
||||||
|
```
|
||||||
|
|
||||||
|
Containers created by the kubelet are located in the k8s.io namespace; to see them, run
|
||||||
|
```bash
|
||||||
|
ctr --namespace k8s.io containers ls
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
CONTAINER IMAGE RUNTIME
|
||||||
|
33d2725dd9f343de6dd0d4b77161a532ae17d410b266efb31862605453eb54e0 k8s.gcr.io/pause:3.2 io.containerd.runtime.v1.linux
|
||||||
|
e75eb4ac89f32ccfb6dc6e894cb6b4429b6dc70eba832bc6dea4dc69b03dec6e sha256:af2c3e96bcf1a80da1d9b57ec0adc29f73f773a4a115344b7e06aec982157a33 io.containerd.runtime.v1.linux
|
||||||
|
```
|
||||||
|
|
||||||
|
And to get the container status, we can call
|
||||||
|
```bash
|
||||||
|
ctr --namespace k8s.io task ls
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
TASK PID STATUS
|
||||||
|
e75eb4ac89f32ccfb6dc6e894cb6b4429b6dc70eba832bc6dea4dc69b03dec6e 1524 RUNNING
|
||||||
|
33d2725dd9f343de6dd0d4b77161a532ae17d410b266efb31862605453eb54e0 1472 RUNNING
|
||||||
|
```
|
||||||
|
|
||||||
|
But this is not what we expected; we expected to see a container named busybox. Of course there is no magic: all the information about the pod a container belongs to, its Kubernetes container name, etc. is located in the container metadata and can be easily extracted with another ctr command (like this - ctr --namespace k8s.io containers info a597ed1f8dee6a43d398173754fd028c7ac481ee27e09ad4642187ed408814b4). But we want to see it in a bit more readable format; this is why we will use a different tool - [crictl](https://github.com/kubernetes-sigs/cri-tools).
|
||||||
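For example, a rough sketch of such an extraction (the io.kubernetes.* label names are what the CRI plugin typically sets; treat them as an assumption):

```bash
# take the first container id in the k8s.io namespace and print the
# kubernetes-related labels stored in its metadata
ID=$(ctr --namespace k8s.io containers ls -q | head -n 1)
ctr --namespace k8s.io containers info $ID | grep io.kubernetes
```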
|
|
||||||
|
In comparison to ctr (which can work with containerd only), crictl is a tool which interacts with any CRI-compliant runtime; containerd is simply the only runtime we use in this tutorial. Also, crictl provides information in a more "kubernetes" way (I mean it can show pods and containers with names like in Kubernetes).
|
||||||
|
|
||||||
|
So, let's download the crictl binaries
|
||||||
```bash
|
```bash
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz
|
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz
|
||||||
```
|
```
|
||||||
|
|
||||||
Install (move to bin folder)
|
After the download completes, move the crictl binaries to the proper folder
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
|
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
|
||||||
|
@ -146,7 +189,8 @@ Install (move to bin folder)
|
||||||
sudo mv crictl /usr/local/bin/
|
sudo mv crictl /usr/local/bin/
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
And configure a bit
|
|
||||||
|
And configure it a bit
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF> /etc/crictl.yaml
|
cat <<EOF> /etc/crictl.yaml
|
||||||
runtime-endpoint: unix:///run/containerd/containerd.sock
|
runtime-endpoint: unix:///run/containerd/containerd.sock
|
||||||
|
@ -156,6 +200,8 @@ debug: false
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
|
As already mentioned, crictl can be configured to use any CRI-compliant runtime; in our case we configured containerd (by providing the containerd socket path).
|
||||||
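A quick way to verify that crictl can actually reach containerd over that socket (a sketch):

```bash
# queries the status and version of the runtime over the CRI socket
sudo crictl info | head
```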
|
|
||||||
And we can finally get the list of pods running on our server
|
And we can finally get the list of pods running on our server
|
||||||
```bash
|
```bash
|
||||||
crictl pods
|
crictl pods
|
||||||
|
@ -196,7 +242,7 @@ Hello from static pod
|
||||||
|
|
||||||
Great, now we can run pods on our server.
|
Great, now we can run pods on our server.
|
||||||
|
|
||||||
Before we will continue, remove our pods running
|
Now, let's clean up our workspace and continue with the next section
|
||||||
```bash
|
```bash
|
||||||
rm /etc/kubernetes/manifests/static-pod.yml
|
rm /etc/kubernetes/manifests/static-pod.yml
|
||||||
```
|
```
|
||||||
|
|
|
@ -1,10 +1,10 @@
|
||||||
# Pod networking
|
# Pod networking
|
||||||
|
|
||||||
In this part of tutorial, we will have closer look at the container networking
|
Now we know how the kubelet runs containers, and we know how to run a pod without the other Kubernetes cluster components.
|
||||||
And lets start with nginx runned inside container.
|
|
||||||
|
|
||||||
Create manifest for nginx static pod
|
Let's experiment with static pods a bit.
|
||||||
|
|
||||||
|
We will create a static pod, but this time we will run nginx instead of busybox
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF> /etc/kubernetes/manifests/static-nginx.yml
|
cat <<EOF> /etc/kubernetes/manifests/static-nginx.yml
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
|
@ -68,8 +68,6 @@ Commercial support is available at
|
||||||
```
|
```
|
||||||
|
|
||||||
Now, let's try to create one more nginx container.
|
Now, let's try to create one more nginx container.
|
||||||
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF> /etc/kubernetes/manifests/static-nginx-2.yml
|
cat <<EOF> /etc/kubernetes/manifests/static-nginx-2.yml
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
|
@ -130,7 +128,8 @@ nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
|
||||||
As we can see, the reason for the exit state is that the address is already in use.
|
As we can see, the reason for the exit state is that the address is already in use.
|
||||||
The address is already in use by our other container.
|
The address is already in use by our other container.
|
||||||
|
|
||||||
We received this error because we run two pods with configuration
|
We received this error because we ran two pods which require access to the same port on our server.
|
||||||
|
This was done by specifying
|
||||||
```
|
```
|
||||||
...
|
...
|
||||||
spec:
|
spec:
|
||||||
|
@ -138,9 +137,9 @@ spec:
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
As we can see our pod are runned in host network.
|
This option runs our containers on the host without any network isolation (almost the same as running two nginx instances on the same host without containers)
|
||||||
Lets try to fix this by updating our manifests to run containers in not host network.
|
|
||||||
|
|
||||||
|
Now we will try to update our pod manifests to run our containers in separate network "namespaces".
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
cat <<EOF> /etc/kubernetes/manifests/static-nginx.yml
|
cat <<EOF> /etc/kubernetes/manifests/static-nginx.yml
|
||||||
|
@ -171,8 +170,9 @@ EOF
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
And check our pods once again
|
As you can see, we simply removed the hostNetwork: true configuration option.
|
||||||
|
|
||||||
|
So, let's check what we have
|
||||||
```bash
|
```bash
|
||||||
crictl pods
|
crictl pods
|
||||||
```
|
```
|
||||||
|
@ -201,9 +201,9 @@ As we can see cni plugin is not initialized. But what is cni plugin.
|
||||||
|
|
||||||
> A CNI plugin is a binary executable that is responsible for configuring the network interfaces and routes of a container or pod. It communicates with the container runtime (such as Docker or CRI-O) to set up networking for the container or pod.
|
> A CNI plugin is a binary executable that is responsible for configuring the network interfaces and routes of a container or pod. It communicates with the container runtime (such as Docker or CRI-O) to set up networking for the container or pod.
|
||||||
|
|
||||||
As we can see kubelet can't configure network for pod by himself, same as with containers, to configure network kubelet use some 'protocol' to communicate with 'someone' who can configure networ.
|
As we can see, the kubelet can't configure the network for a pod by itself (or with the help of containerd). Same as with containers: to configure the network, the kubelet uses some 'protocol' to communicate with 'someone' who can configure the network.
|
||||||
|
|
||||||
Now, we will configure the cni plugin 1for our instalation.
|
Now, we will configure the cni plugin for our kubelet.
|
||||||
|
|
||||||
First of all we need to download that plugin
|
First of all we need to download that plugin
|
||||||
|
|
||||||
|
@ -262,8 +262,11 @@ EOF
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
And finaly we need to update our kubelet config (add network-plugin configuration option)
|
Of course all the configuration options here are important, but I want to highlight two of them:
|
||||||
|
- ranges - information about the subnets from which IP addresses will be assigned to our pods
|
||||||
|
- routes - information on how to route traffic between nodes; as we have a single-node Kubernetes cluster, the configuration is very easy (see the sketch after this list)
|
||||||
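For illustration only (a sketch: the real configuration file was created in the collapsed step above, and the plugin and bridge names here are assumed; the subnet matches the tutorial's pod range), these options usually live in the ipam section of a bridge plugin config:

```
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cnio0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [[{ "subnet": "10.240.1.0/24" }]],
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```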
|
|
||||||
|
Update our kubelet config (add network-plugin configuration option)
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
||||||
[Unit]
|
[Unit]
|
||||||
|
@ -290,8 +293,7 @@ WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
Of course restart it
|
After the kubelet is reconfigured, we can restart it
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
|
@ -300,7 +302,6 @@ Of course restart it
|
||||||
```
|
```
|
||||||
|
|
||||||
And check kubelet status
|
And check kubelet status
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo systemctl status kubelet
|
sudo systemctl status kubelet
|
||||||
```
|
```
|
||||||
|
@ -318,8 +319,7 @@ Output:
|
||||||
└─86730 /usr/local/bin/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --image-pull-progress-deadline=2m --file-che>
|
└─86730 /usr/local/bin/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --image-pull-progress-deadline=2m --file-che>
|
||||||
```
|
```
|
||||||
|
|
||||||
Now, when we fixed everything, lets ckeck if our pods are in running state
|
Now, after all the fixes are applied and we have a working kubelet, we can check whether our pods were created
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
crictl pods
|
crictl pods
|
||||||
```
|
```
|
||||||
|
@ -347,7 +347,7 @@ CONTAINER IMAGE CREATED STATE
|
||||||
They are also in running state
|
They are also in running state
|
||||||
|
|
||||||
At this step, if we try to curl localhost, nothing will happen.
|
At this step, if we try to curl localhost, nothing will happen.
|
||||||
Our pods are runned in separate network namespaces, and each pod has own ip address.
|
Our pods run in separate network namespaces, and each pod has its own IP address.
|
||||||
We need to find it.
|
We need to find it.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
@ -370,9 +370,9 @@ Output:
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
During the plugin configuration we remember that we configure the subnet pod our pods to be 10.240.1.0/24.
|
Remember that during the plugin configuration we set the subnet for our pods to 10.240.1.0/24. So, the container received its IP from the specified range; in my case it was 10.240.1.1.
|
||||||
Now, we can curl our container.
|
|
||||||
|
|
||||||
|
So, let's try to curl our container.
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
PID=$(crictl pods --label app=static-nginx-2 -q)
|
PID=$(crictl pods --label app=static-nginx-2 -q)
|
||||||
|
@ -409,7 +409,7 @@ Commercial support is available at
|
||||||
</html>
|
</html>
|
||||||
```
|
```
|
||||||
|
|
||||||
As we can see we successfully reached out container.
|
As we can see, we successfully reached our container from our host.
|
||||||
|
|
||||||
But we remember that the CNI plugin is also responsible for configuring communication between containers.
|
But we remember that the CNI plugin is also responsible for configuring communication between containers.
|
||||||
Let's check
|
Let's check
|
||||||
|
@ -493,7 +493,9 @@ written to stdout
|
||||||
|
|
||||||
As we can see, we successfully reached our container from busybox.
|
As we can see, we successfully reached our container from busybox.
|
||||||
|
|
||||||
Now, we will clean up workplace
|
In this section, we configured the CNI plugin for our installation, and now we can run pods which can communicate with each other over the network.
|
||||||
|
|
||||||
|
In the next section, we will proceed with the Kubernetes cluster configuration, but before that, we need to clean up the workspace.
|
||||||
```bash
|
```bash
|
||||||
rm /etc/kubernetes/manifests/static-*
|
rm /etc/kubernetes/manifests/static-*
|
||||||
```
|
```
|
||||||
|
|
|
@ -1,27 +1,38 @@
|
||||||
# ETCD
|
# ETCD
|
||||||
|
|
||||||
At this point we already know that we can run pods even withour API server. But current aproach os not very confortable to use, to create pod we need to place some manifest in some place. it is not very comfortable to manage. Now we will start our jorney of configuring "real" kubernetes. And of cource all our manifests should be stored somewhere.
|
At this point, we already know that we can run pods even without an API server. But the current approach is not very comfortable to use: to create a pod, we need to place a manifest in a particular place, which is hard to manage. Now we will start our journey of configuring a "real" Kubernetes (more real than the current one, because the current one doesn't look like Kubernetes at all). And of course we need to start with the storage.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
For kubernetes (at least for original one it I can say so) we need to configura database called ETCD.
|
For Kubernetes (at least for the original one, if I can say so) we need to configure a database called [etcd](https://etcd.io/).
|
||||||
|
|
||||||
To configure db (and other kubennetes components in future) we will need some tools to configure certificates.
|
>etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node.
|
||||||
|
|
||||||
|
Our etcd will be configured as a single-node database with authentication (by usage of a client cert file).
|
||||||
|
|
||||||
|
So, let's start.
|
||||||
|
|
||||||
|
As I already said, communication with our etcd cluster will be secured; it means that we need to generate some keys to encrypt all the traffic.
|
||||||
|
To do so, we need to download the tools which will help us to generate certificates
|
||||||
|
```bash
|
||||||
|
wget -q --show-progress --https-only --timestamping \
|
||||||
|
https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 \
|
||||||
|
https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64
|
||||||
|
```
|
||||||
|
|
||||||
|
And install them
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
wget -q --show-progress --https-only --timestamping \
|
mv cfssl_1.4.1_linux_amd64 cfssl
|
||||||
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
|
mv cfssljson_1.4.1_linux_amd64 cfssljson
|
||||||
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
|
chmod +x cfssl cfssljson
|
||||||
chmod +x cfssl cfssljson
|
sudo mv cfssl cfssljson /usr/local/bin/
|
||||||
sudo mv cfssl cfssljson /usr/local/bin/
|
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
And now lets begin our etcd configuration journey.
|
After the tools are installed successfully, we need to generate a CA certificate.
|
||||||
|
|
||||||
First of all we will create ca certificate file.
|
|
||||||
|
|
||||||
|
A ca (Certificate Authority) certificate, also known as a root certificate or a trusted root certificate, is a digital certificate that is used to verify the authenticity of other certificates.
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
cat > ca-config.json <<EOF
|
cat > ca-config.json <<EOF
|
||||||
|
@ -70,8 +81,9 @@ ca.csr
|
||||||
ca.pem
|
ca.pem
|
||||||
```
|
```
|
||||||
|
|
||||||
Now, we need to create certificate which will be used by ETCD (not only ETCD, but about that in next parts) as server cert.
|
Now, we can create the certificate files signed by our CA.
|
||||||
|
|
||||||
|
> To simplify our Kubernetes deployment, we will use this certificate for other Kubernetes components as well; that is why we add some extra configs (like KUBERNETES_HOST_NAME) to it.
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
HOST_NAME=$(hostname -a)
|
HOST_NAME=$(hostname -a)
|
||||||
|
@ -113,14 +125,13 @@ kubernetes-key.pem
|
||||||
kubernetes.pem
|
kubernetes.pem
|
||||||
```
|
```
|
||||||
|
|
||||||
Now, when we have all required certs, we need to download etcd
|
Now we have all the required certificates, so let's download etcd
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
|
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
|
||||||
```
|
```
|
||||||
|
|
||||||
Decompres and install it to the proper folder
|
After the download completes, we can move the etcd binaries to the proper folders
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
|
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
|
||||||
|
@ -128,8 +139,7 @@ Decompres and install it to the proper folder
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
When etcd is installed, we need to move our generated certificates to the proper folder
|
Now we can start with the configuration of the etcd service. First of all, we need to distribute the previously generated certificates to the proper folder
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo mkdir -p /etc/etcd /var/lib/etcd
|
sudo mkdir -p /etc/etcd /var/lib/etcd
|
||||||
|
@ -141,7 +151,6 @@ When etcd is installed, we need to move our generated certificates to the proper
|
||||||
```
|
```
|
||||||
|
|
||||||
Create etcd service configuration file
|
Create etcd service configuration file
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
|
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
|
||||||
[Unit]
|
[Unit]
|
||||||
|
@ -174,25 +183,24 @@ Configuration options specified:
|
||||||
- key-file - path to the SSL/TLS private key file that corresponds to the SSL/TLS certificate presented by the etcd server during the TLS handshake
|
- key-file - path to the SSL/TLS private key file that corresponds to the SSL/TLS certificate presented by the etcd server during the TLS handshake
|
||||||
- trusted-ca-file - path to the ca file which will be used by etcd to validate client certificate
|
- trusted-ca-file - path to the ca file which will be used by etcd to validate client certificate
|
||||||
- listen-client-urls - specifies the network addresses on which the etcd server listens for client requests
|
- listen-client-urls - specifies the network addresses on which the etcd server listens for client requests
|
||||||
- specifies the network addresses that the etcd server advertises to clients for connecting to the server
|
- advertise-client-urls - specifies the network addresses that the etcd server advertises to clients for connecting to the server
|
||||||
- data-dir - directory where etcd stores its data, including the key-value pairs in the etcd key-value store, snapshots, and transaction logs
|
- data-dir - directory where etcd stores its data, including the key-value pairs in the etcd key-value store, snapshots, and transaction logs
|
||||||
|
|
||||||
And finally we need to run our etcd service
|
And finally we need to run our etcd service
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
sudo systemctl enable etcd
|
sudo systemctl enable etcd
|
||||||
sudo systemctl start etcd
|
sudo systemctl start etcd
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
And ensure that etcd is up and running
|
To ensure that our service started successfully, run
|
||||||
```bash
|
```bash
|
||||||
systemctl status etcd
|
systemctl status etcd
|
||||||
```
|
```
|
||||||
|
|
||||||
Output:
|
The output should be similar to
|
||||||
```
|
```
|
||||||
● etcd.service - etcd
|
● etcd.service - etcd
|
||||||
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
|
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
|
||||||
|
@ -206,7 +214,7 @@ Output:
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
When etcd is up and running we can check wheather we can connact to it.
|
Now, when etcd is up and running, we can check whether we can communicate with it
|
||||||
```
|
```
|
||||||
sudo ETCDCTL_API=3 etcdctl member list \
|
sudo ETCDCTL_API=3 etcdctl member list \
|
||||||
--endpoints=https://127.0.0.1:2379 \
|
--endpoints=https://127.0.0.1:2379 \
|
||||||
|
@ -216,9 +224,10 @@ sudo ETCDCTL_API=3 etcdctl member list \
|
||||||
```
|
```
|
||||||
|
|
||||||
Output:
|
Output:
|
||||||
Result:
|
|
||||||
```bash
|
```bash
|
||||||
8e9e05c52164694d, started, etcd, http://localhost:2380, https://127.0.0.1:2379, false
|
8e9e05c52164694d, started, etcd, http://localhost:2380, https://127.0.0.1:2379, false
|
||||||
```
|
```
|
||||||
|
|
||||||
|
As you can see, to communicate with our etcd service we specified a cert and key file; these are the same files we used to configure etcd. This is only to simplify our deployment; in real life, we could use a different certificate signed by the same CA file.
|
||||||
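As an extra sanity check, we can write and read back a test key (a sketch: the key and value are arbitrary, and the certificate paths assume the /etc/etcd layout used above):

```bash
{
sudo ETCDCTL_API=3 etcdctl put test-key test-value \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

sudo ETCDCTL_API=3 etcdctl get test-key \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
}
```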
|
|
||||||
Next: [Api Server](./05-apiserver.md)
|
Next: [Api Server](./05-apiserver.md)
|
|
@ -1,8 +1,15 @@
|
||||||
# API server
|
# Api Server
|
||||||
|
|
||||||
|
In this section, we will configure the Kubernetes API server.
|
||||||
|
> The Kubernetes API server validates and configures data for the api objects which include pods, services, replicationcontrollers, and others. The API Server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.
|
||||||
|
|
||||||
|
As you can see from the description, the API server is the central (though not the main) component of a Kubernetes cluster.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
since we have already configured the DB, we can start configuring the kube API server itself and try to set something up
|
## certificates
|
||||||
|
|
||||||
|
Before we begin with the configuration of the API server, we need to create the certificates for Kubernetes that will be used to sign service account tokens.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -34,42 +41,32 @@ cfssl gencert \
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Now, we need to distribute the certificates to the API server configuration folder
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
cat > admin-csr.json <<EOF
|
sudo mkdir -p /var/lib/kubernetes/
|
||||||
|
sudo cp \
|
||||||
|
ca.pem \
|
||||||
|
kubernetes.pem kubernetes-key.pem \
|
||||||
|
service-account-key.pem service-account.pem \
|
||||||
|
/var/lib/kubernetes/
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
As you can see, in addition to the generated service-account certificate file, we also distributed the certificate generated in the [previous](./04-etcd.md) section. We will use that certificate:
|
||||||
|
- for communication between the API server and etcd
|
||||||
|
- as the server certificate when communicating with the API server
|
||||||
|
|
||||||
|
Also, we will use the CA file to validate the certificates of the other components which communicate with the API server.
|
||||||
|
|
||||||
|
## data encryption
|
||||||
|
|
||||||
|
Also, we will configure the API server to encrypt sensitive data before saving it to the etcd database. To do that, we need to create an encryption config file.
|
||||||
|
```bash
|
||||||
{
|
{
|
||||||
"CN": "admin",
|
|
||||||
"key": {
|
|
||||||
"algo": "rsa",
|
|
||||||
"size": 2048
|
|
||||||
},
|
|
||||||
"names": [
|
|
||||||
{
|
|
||||||
"C": "US",
|
|
||||||
"L": "Portland",
|
|
||||||
"O": "system:masters",
|
|
||||||
"OU": "Kubernetes The Hard Way",
|
|
||||||
"ST": "Oregon"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
EOF
|
|
||||||
|
|
||||||
cfssl gencert \
|
|
||||||
-ca=ca.pem \
|
|
||||||
-ca-key=ca-key.pem \
|
|
||||||
-config=ca-config.json \
|
|
||||||
-profile=kubernetes \
|
|
||||||
admin-csr.json | cfssljson -bare admin
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
|
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
cat > /var/lib/kubernetes/encryption-config.yaml <<EOF
|
||||||
cat > encryption-config.yaml <<EOF
|
|
||||||
kind: EncryptionConfig
|
kind: EncryptionConfig
|
||||||
apiVersion: v1
|
apiVersion: v1
|
||||||
resources:
|
resources:
|
||||||
|
@ -82,45 +79,28 @@ resources:
|
||||||
secret: ${ENCRYPTION_KEY}
|
secret: ${ENCRYPTION_KEY}
|
||||||
- identity: {}
|
- identity: {}
|
||||||
EOF
|
EOF
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
This config tells Kubernetes to encrypt secrets, when storing them in etcd, using the aescbc encryption provider.
|
||||||
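Once the API server and kubectl are up (later in this section), one way to verify that the encryption really happens is to create a secret and inspect its raw record in etcd. This is a sketch; the provider prefix mentioned in the comment assumes the aescbc config above, and the certificate paths assume the /etc/etcd layout from the previous section:

```bash
{
kubectl create secret generic test-secret --from-literal=mykey=mydata

sudo ETCDCTL_API=3 etcdctl get /registry/secrets/default/test-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem | hexdump -C
# the stored value should start with a k8s:enc:aescbc:v1: prefix, not plaintext
}
```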
|
|
||||||
```
|
## service configuration
|
||||||
sudo mkdir -p /etc/kubernetes/config
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
Now, when all the required configuration and certificate files are created and distributed to the proper folders, we can download the binaries and enable the API server as a service.
|
||||||
|
|
||||||
|
First of all, we need to download and install the API server binaries
|
||||||
|
|
||||||
|
```bash
|
||||||
|
{
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver"
|
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver"
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
{
|
|
||||||
chmod +x kube-apiserver
|
chmod +x kube-apiserver
|
||||||
sudo mv kube-apiserver /usr/local/bin/
|
sudo mv kube-apiserver /usr/local/bin/
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
```
|
And create the service configuration file
|
||||||
{
|
|
||||||
sudo mkdir -p /var/lib/kubernetes/
|
|
||||||
|
|
||||||
sudo cp \
|
|
||||||
ca.pem \
|
|
||||||
kubernetes.pem kubernetes-key.pem \
|
|
||||||
encryption-config.yaml \
|
|
||||||
service-account-key.pem service-account.pem \
|
|
||||||
/var/lib/kubernetes/
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo mkdir -p /etc/kubernetes/config
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
ADVERTISE_IP=$(ip addr show | grep 'inet ' | grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d'/')
|
|
||||||
|
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
|
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
|
||||||
[Unit]
|
[Unit]
|
||||||
Description=Kubernetes API Server
|
Description=Kubernetes API Server
|
||||||
|
@ -128,7 +108,6 @@ Documentation=https://github.com/kubernetes/kubernetes
|
||||||
|
|
||||||
[Service]
|
[Service]
|
||||||
ExecStart=/usr/local/bin/kube-apiserver \\
|
ExecStart=/usr/local/bin/kube-apiserver \\
|
||||||
--advertise-address='${ADVERTISE_IP}' \\
|
|
||||||
--allow-privileged='true' \\
|
--allow-privileged='true' \\
|
||||||
--audit-log-maxage='30' \\
|
--audit-log-maxage='30' \\
|
||||||
--audit-log-maxbackup='3' \\
|
--audit-log-maxbackup='3' \\
|
||||||
|
@ -165,18 +144,25 @@ WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Configuration options I want to highlight:
|
||||||
|
- client-ca-file - certificate file which will be used to validate client certificates and authenticate users
|
||||||
|
|
||||||
|
Now, when the api-server service is configured, we can start it
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
sudo systemctl enable kube-apiserver
|
sudo systemctl enable kube-apiserver
|
||||||
sudo systemctl restart kube-apiserver
|
sudo systemctl start kube-apiserver
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
And check service status
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo systemctl status kube-apiserver
|
sudo systemctl status kube-apiserver
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
● kube-apiserver.service - Kubernetes API Server
|
● kube-apiserver.service - Kubernetes API Server
|
||||||
Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
|
Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
|
||||||
|
@ -190,6 +176,10 @@ sudo systemctl status kube-apiserver
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## communication with api server
|
||||||
|
|
||||||
|
Now, when our server is up and running, we want to communicate with it. To do that, we will use the kubectl tool. So let's download and install it
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
|
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
|
||||||
|
@ -197,31 +187,72 @@ wget -q --show-progress --https-only --timestamping \
|
||||||
&& sudo mv kubectl /usr/local/bin/
|
&& sudo mv kubectl /usr/local/bin/
|
||||||
```
|
```
|
||||||
|
|
||||||
```
|
As our server is configured to use RBAC authorization, we need to authenticate to our server somehow. To do that, we will generate a certificate file which is signed by the CA cert and has the "admin" CN property.
|
||||||
|
|
||||||
|
```bash
|
||||||
{
|
{
|
||||||
kubectl config set-cluster kubernetes-the-hard-way \
|
cat > admin-csr.json <<EOF
|
||||||
--certificate-authority=ca.pem \
|
{
|
||||||
--embed-certs=true \
|
"CN": "admin",
|
||||||
--server=https://127.0.0.1:6443
|
"key": {
|
||||||
|
"algo": "rsa",
|
||||||
|
"size": 2048
|
||||||
|
},
|
||||||
|
"names": [
|
||||||
|
{
|
||||||
|
"C": "US",
|
||||||
|
"L": "Portland",
|
||||||
|
"O": "system:masters",
|
||||||
|
"OU": "Kubernetes The Hard Way",
|
||||||
|
"ST": "Oregon"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
EOF
|
||||||
|
|
||||||
kubectl config set-credentials admin \
|
cfssl gencert \
|
||||||
--client-certificate=admin.pem \
|
-ca=ca.pem \
|
||||||
--client-key=admin-key.pem \
|
-ca-key=ca-key.pem \
|
||||||
--embed-certs=true
|
-config=ca-config.json \
|
||||||
|
-profile=kubernetes \
|
||||||
kubectl config set-context default \
|
admin-csr.json | cfssljson -bare admin
|
||||||
--cluster=kubernetes-the-hard-way \
|
|
||||||
--user=admin
|
|
||||||
|
|
||||||
kubectl config use-context default
|
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Now, when our certificate file is generated, we can use it in kubectl. To do that, we will update the default kubectl config file (actually, we will create it) to use the proper certs and connection options.
|
||||||
```bash
|
```bash
|
||||||
kubectl version --kubeconfig=admin.kubeconfig
|
{
|
||||||
|
kubectl config set-cluster kubernetes-the-hard-way \
|
||||||
|
--certificate-authority=ca.pem \
|
||||||
|
--embed-certs=true \
|
||||||
|
--server=https://127.0.0.1:6443
|
||||||
|
|
||||||
|
kubectl config set-credentials admin \
|
||||||
|
--client-certificate=admin.pem \
|
||||||
|
--client-key=admin-key.pem \
|
||||||
|
--embed-certs=true
|
||||||
|
|
||||||
|
kubectl config set-context default \
|
||||||
|
--cluster=kubernetes-the-hard-way \
|
||||||
|
--user=admin
|
||||||
|
|
||||||
|
kubectl config use-context default
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
now we can start creating pods and everything should work properly
|
Now, we should be able to retrieve our cluster and kubectl info
|
||||||
|
|
||||||
|
```bash
|
||||||
|
kubectl version
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
|
||||||
|
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
|
||||||
|
```
|
||||||
|
|
||||||
|
As I already mentioned, the api-server is the central Kubernetes component which stores information about all Kubernetes objects; it means that we can create a pod even when the other components (kubelet, scheduler, controller manager) are not configured
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -249,22 +280,34 @@ metadata:
|
||||||
automountServiceAccountToken: false
|
automountServiceAccountToken: false
|
||||||
EOF
|
EOF
|
||||||
|
|
||||||
kubectl apply -f sa.yaml --kubeconfig=admin.kubeconfig
|
kubectl apply -f sa.yaml
|
||||||
kubectl apply -f pod.yaml --kubeconfig=admin.kubeconfig
|
kubectl apply -f pod.yaml
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Note: as you can see, in addition to the pod, we created a service account associated with our pod. This step is needed as we have no default service account created in the default namespace (the service account controller is responsible for creating it, but we didn't configure the controller manager yet).
|
||||||
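We can verify this (a quick check: expect to see the service account we just created, but no default one):

```bash
# the "default" service account would normally be created by the
# controller manager; it should be absent from this list for now
kubectl get serviceaccounts
```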
|
|
||||||
|
To check the pod status, run
|
||||||
```bash
|
```bash
|
||||||
kubectl get pod --kubeconfig=admin.kubeconfig
|
kubectl get pod
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
NAME READY STATUS RESTARTS AGE
|
NAME READY STATUS RESTARTS AGE
|
||||||
hello-world 0/1 Pending 0 29s
|
hello-world 0/1 Pending 0 29s
|
||||||
```
|
```
|
||||||
|
|
||||||
so the pod exists, but it is in pending status; that's not right
|
As expected, we received a pod in pending state, because we have no kubelet configured to run pods created in the API server.
|
||||||
in reality, we do have a kubelet, but it knows nothing about the server, and the server knows nothing about it
|
|
||||||
we need to solve this little problem
|
|
||||||
|
|
||||||
Next: [Apiserver - Kubelet integration](./06-apiserver-kubelet.md)
|
To confirm that the API server doesn't know about our kubelet, we can check the node list
|
||||||
|
```bash
|
||||||
|
kubectl get nodes
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
NAME STATUS ROLES AGE VERSION
|
||||||
|
```
|
||||||
|
|
||||||
|
Next: [Apiserver - Kubelet integration](./06-apiserver-kubelet.md)
|
||||||
|
|
|
@ -1,7 +1,13 @@
|
||||||
# Kubelet
|
# Apiserver - Kubelet integration
|
||||||
|
|
||||||
|
In this section, we will configure the kubelet to run not only static pods but also pods created in the API server.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
|
## certificates
|
||||||
|
|
||||||
|
Again, we will start this part with the creation of the certificates which will be used by the kubelet to communicate with the API server, and also by the API server when it communicates with the kubelet.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
HOST_NAME=$(hostname -a)
|
HOST_NAME=$(hostname -a)
|
||||||
|
@ -34,7 +40,32 @@ cfssl gencert \
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
this is the kubelet configuration file; it tells the kubelet how to reach the API server
|
Created certificates:
|
||||||
|
```
|
||||||
|
kubelet.csr
|
||||||
|
kubelet-key.pem
|
||||||
|
kubelet.pem
|
||||||
|
```
|
||||||
|
|
||||||
|
The most interesting configuration options:
|
||||||
|
- CN (common name) - the value the API server will use as the client name during authorization
|
||||||
|
- o (organization) - the user group the api server will use during authorization
|
||||||
|
|
||||||
|
We specified "system:nodes" in the organization. It says api server that the client who uses which certificate belongs to the system:nodes group.
|
||||||
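The CSR json itself is elided from this diff; under the usual kubernetes-the-hard-way conventions (an assumption here, mirroring the kube-scheduler CSR shown in a later section) it would look roughly like this:

```bash
{
cat > kubelet-csr.json <<EOF
{
  "CN": "system:node:${HOST_NAME}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF
}
```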
|
|
||||||
|
Now we need to distribute the generated certificates.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
{
|
||||||
|
sudo cp kubelet-key.pem kubelet.pem /var/lib/kubelet/
|
||||||
|
sudo cp ca.pem /var/lib/kubernetes/
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## service configuration
|
||||||
|
|
||||||
|
After the certificates are created and distributed, we need to prepare the configuration files for the kubelet.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
HOST_NAME=$(hostname -a)
|
HOST_NAME=$(hostname -a)
|
||||||
|
@ -59,14 +90,15 @@ kubectl config use-context default --kubeconfig=kubelet.kubeconfig
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
We created a kubernetes configuration file which tells the kubelet where the api server is located and which certificates to use when communicating with it.
|
||||||
|
|
||||||
|
And now, move all our configuration files to the proper folders
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
sudo cp kubelet.kubeconfig /var/lib/kubelet/kubeconfig
|
||||||
sudo cp kubelet-key.pem kubelet.pem /var/lib/kubelet/
|
|
||||||
sudo cp kubelet.kubeconfig /var/lib/kubelet/kubeconfig
|
|
||||||
sudo cp ca.pem /var/lib/kubernetes/
|
|
||||||
}
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Also, we need to create the KubeletConfiguration
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
|
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
|
||||||
kind: KubeletConfiguration
|
kind: KubeletConfiguration
|
||||||
|
@ -91,6 +123,13 @@ tlsPrivateKeyFile: "/var/lib/kubelet/kubelet-key.pem"
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Configuration options I want to highlight:
|
||||||
|
- podCIDR - the pod network CIDR, the same as we configured previously
|
||||||
|
- tls configuration - the certificate files the kubelet will use when the api server connects to it
|
||||||
|
- authentication.webhook.enabled - means that to authorize requests the kubelet will ask the api server
|
||||||
|
- clusterDNS - the IP on which the cluster DNS server will be hosted (more on this later)
|
||||||
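The middle of kubelet-config.yaml is elided by this diff; a sketch of just the fields discussed above, with values assumed from this tutorial's layout (pod CIDR 10.240.1.0/24, cluster DNS 10.32.0.10), could look like:

```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.240.1.0/24"
tlsCertFile: "/var/lib/kubelet/kubelet.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet-key.pem"
EOF
```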
|
|
||||||
|
And the last step - we need to update the service configuration file
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
||||||
[Unit]
|
[Unit]
|
||||||
|
@ -107,7 +146,7 @@ ExecStart=/usr/local/bin/kubelet \\
|
||||||
--image-pull-progress-deadline=2m \\
|
--image-pull-progress-deadline=2m \\
|
||||||
--kubeconfig=/var/lib/kubelet/kubeconfig \\
|
--kubeconfig=/var/lib/kubelet/kubeconfig \\
|
||||||
--network-plugin=cni \\
|
--network-plugin=cni \\
|
||||||
--register-node=true
|
--register-node=true \\
|
||||||
--v=2
|
--v=2
|
||||||
Restart=on-failure
|
Restart=on-failure
|
||||||
RestartSec=5
|
RestartSec=5
|
||||||
|
@ -117,6 +156,8 @@ WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
|
And reload it
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
|
@ -125,57 +166,67 @@ EOF
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## verification
|
||||||
|
|
||||||
|
And check the service status
|
||||||
```bash
|
```bash
|
||||||
sudo systemctl status kubelet
|
sudo systemctl status kubelet
|
||||||
```
|
```
|
||||||
|
|
||||||
so, now the moment of truth - we need to find out whether any nodes have appeared in our cluster
|
Output:
|
||||||
|
```
|
||||||
|
● kubelet.service - Kubernetes Kubelet
|
||||||
|
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
|
||||||
|
Active: active (running) since Thu 2023-05-04 14:37:07 UTC; 11s ago
|
||||||
|
Docs: https://github.com/kubernetes/kubernetes
|
||||||
|
Main PID: 95898 (kubelet)
|
||||||
|
Tasks: 12 (limit: 2275)
|
||||||
|
Memory: 53.8M
|
||||||
|
CGroup: /system.slice/kubelet.service
|
||||||
|
└─95898 /usr/local/bin/kubelet --config=/var/lib/kubelet/kubelet-config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.soc>
|
||||||
|
...
|
||||||
|
```
|
||||||
|
|
||||||
|
After our kubelet is in running state, we can check whether it is registered in the API server
|
||||||
```bash
|
```bash
|
||||||
kubectl get nodes
|
kubectl get nodes
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
NAME STATUS ROLES AGE VERSION
|
NAME STATUS ROLES AGE VERSION
|
||||||
example-server NotReady <none> 6s v1.21.0
|
example-server Ready <none> 1m2s v1.21.0
|
||||||
```
|
```
|
||||||
|
|
||||||
oh wow, if there are nodes, there must be a container too
|
After our kubelet is registered in the API server, we can check whether our pod is in running state
|
||||||
```bash
|
```bash
|
||||||
kubectl get pod
|
kubectl get pod
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
NAME READY STATUS RESTARTS AGE
|
NAME READY STATUS RESTARTS AGE
|
||||||
hello-world 1/1 Running 0 8m1s
|
hello-world 1/1 Running 0 8m1s
|
||||||
```
|
```
|
||||||
|
|
||||||
it says it is running, but what is actually happening?
|
As we can see, our pod is in running state. In addition, we can verify that the pod is really running by using crictl
|
||||||
|
Pods
|
||||||
```bash
|
```bash
|
||||||
crictl pods
|
crictl pods
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
|
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
|
||||||
1719d0202a5ef 8 minutes ago Ready hello-world default 0 (default)
|
1719d0202a5ef 8 minutes ago Ready hello-world default 0 (default)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Also, we can see the logs from our pod
|
||||||
```bash
|
|
||||||
crictl ps
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
|
|
||||||
3f2b0a0d70377 7cfbbec8963d8 8 minutes ago Running hello-world-container 0 1719d0202a5ef
|
|
||||||
```
|
|
||||||
|
|
||||||
we can even take a look at the logs
|
|
||||||
```bash
|
```bash
|
||||||
crictl logs $(crictl ps -q)
|
crictl logs $(crictl ps -q)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
Hello, World!
|
Hello, World!
|
||||||
Hello, World!
|
Hello, World!
|
||||||
|
@ -184,24 +235,21 @@ Hello, World!
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
but that is no way to view logs
|
But now, let's view the logs using kubectl instead of crictl. In our case it may not seem very important, but in a cluster with more than 1 node it is: crictl can only read info about pods on its own node, while kubectl (by communicating with the api server) can read info from all nodes.
|
||||||
now that we have confirmed our server works and tells the truth, we can use kubectl alone
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl logs hello-world
|
kubectl logs hello-world
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log hello-world)
|
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log hello-world)
|
||||||
```
|
```
|
||||||
|
|
||||||
the cause of the error is the lack of the required permissions
|
As we can see, the api server has no permission to read logs from the node. This message appears because during authorization the kubelet asks the api server whether the user named kubernetes has the proper permissions, and for now it does not. So let's fix this
|
||||||
let's create them
|
|
||||||
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
cat <<EOF> rbac.yml
|
cat <<EOF> node-auth.yml
|
||||||
apiVersion: rbac.authorization.k8s.io/v1
|
apiVersion: rbac.authorization.k8s.io/v1
|
||||||
kind: ClusterRole
|
kind: ClusterRole
|
||||||
metadata:
|
metadata:
|
||||||
|
@ -225,14 +273,16 @@ subjects:
|
||||||
name: kubernetes
|
name: kubernetes
|
||||||
EOF
|
EOF
|
||||||
|
|
||||||
kubectl apply -f rbac.yml
|
kubectl apply -f node-auth.yml
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
After our cluster role and cluster role binding are created, we can retry
|
||||||
```bash
|
```bash
|
||||||
kubectl logs hello-world
|
kubectl logs hello-world
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
Hello, World!
|
Hello, World!
|
||||||
Hello, World!
|
Hello, World!
|
||||||
|
@ -241,6 +291,22 @@ Hello, World!
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
wow, now everything definitely works and we can use kubernetes - but wait, we still have components that are grey on our diagram, let's figure them out
|
As you can see, we can create pods and the kubelet will run them.
|
||||||
|
|
||||||
|
Now, we need to clean up our workspace.
|
||||||
|
```bash
|
||||||
|
kubectl delete -f pod.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
Check if the pod was deleted
|
||||||
|
```bash
|
||||||
|
kubectl get pod
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
No resources found in default namespace.
|
||||||
|
```
|
||||||
|
|
||||||
|
Next: [Scheduler](./07-scheduler.md)
|
||||||
|
|
@ -0,0 +1,264 @@
|
||||||
|
# Scheduler
|
||||||
|
|
||||||
|
In this section, we will configure the scheduler.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
> In Kubernetes, a scheduler is a core component responsible for assigning and placing workloads (such as pods) onto available nodes in a cluster. It ensures that the cluster's resources are utilized efficiently and that workloads are scheduled based on their resource requirements and other constraints.
|
||||||
|
> Kubelet regularly requests the list of pods assigned to it. If a new pod appears, the kubelet will run it. If a pod is marked as deleted, the kubelet will start the termination process.
|
||||||
|
|
||||||
|
In the previous section, we created a pod and it was run on the node, but why?
|
||||||
|
The reason is that we specified the node name on which to run the pod ourselves:
|
||||||
|
```bash
|
||||||
|
nodeName: ${HOST_NAME}
|
||||||
|
```
|
||||||
|
|
||||||
|
So, let's create a pod without a node specified
|
||||||
|
```bash
|
||||||
|
{
|
||||||
|
cat <<EOF> pod.yaml
|
||||||
|
apiVersion: v1
|
||||||
|
kind: Pod
|
||||||
|
metadata:
|
||||||
|
name: hello-world
|
||||||
|
spec:
|
||||||
|
serviceAccountName: hello-world
|
||||||
|
containers:
|
||||||
|
- name: hello-world-container
|
||||||
|
image: busybox
|
||||||
|
command: ['sh', '-c', 'while true; do echo "Hello, World!"; sleep 1; done']
|
||||||
|
EOF
|
||||||
|
|
||||||
|
kubectl apply -f pod.yaml
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
And check the pod status
|
||||||
|
```bash
|
||||||
|
kubectl get pod -o wide
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||||
|
hello-world 0/1 Pending 0 19s <none> <none> <none> <none>
|
||||||
|
```
|
||||||
|
|
||||||
|
As we can see, the node field of our pod is none and the pod is in pending state.
|
||||||
|
So, let's configure the scheduler and check whether it solves the issue.
|
||||||
|
|
||||||
|
## certificates
|
||||||
|
|
||||||
|
We will start with certificates.
|
||||||
|
|
||||||
|
As you remember, we configured our API server to use client certificates to authenticate users.
|
||||||
|
So, let's create a proper certificate for the scheduler
|
||||||
|
```bash
|
||||||
|
{
|
||||||
|
cat > kube-scheduler-csr.json <<EOF
|
||||||
|
{
|
||||||
|
"CN": "system:kube-scheduler",
|
||||||
|
"key": {
|
||||||
|
"algo": "rsa",
|
||||||
|
"size": 2048
|
||||||
|
},
|
||||||
|
"names": [
|
||||||
|
{
|
||||||
|
"C": "US",
|
||||||
|
"L": "Portland",
|
||||||
|
"O": "system:kube-scheduler",
|
||||||
|
"OU": "Kubernetes The Hard Way",
|
||||||
|
"ST": "Oregon"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
}
|
||||||
|
EOF
|
||||||
|
|
||||||
|
cfssl gencert \
|
||||||
|
-ca=ca.pem \
|
||||||
|
-ca-key=ca-key.pem \
|
||||||
|
-config=ca-config.json \
|
||||||
|
-profile=kubernetes \
|
||||||
|
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
The most interesting configuration options:
|
||||||
|
- cn (common name) - the value the api server will use as the client name during authorization
|
||||||
|
- o (organization) - the user group the api server will use during authorization
|
||||||
|
|
||||||
|
We specified "system:kube-scheduler" in the organization. It says api server that the client who uses which certificate belongs to the system:kube-scheduler group. Api server know, that this group is allowed to make proper modifications to pod specification to assign pod to node.
|
||||||
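One way to see this permission in action (a hedged check using kubectl impersonation; it is not part of the original walkthrough) is to ask the api server whether this identity may create pod bindings, which is exactly how the scheduler assigns a pod to a node:

```bash
# should print "yes" thanks to the bootstrap system:kube-scheduler RBAC role
kubectl auth can-i create pods/binding --as=system:kube-scheduler
```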
|
|
||||||
|
## service configuration
|
||||||
|
|
||||||
|
After the certificate files are created, we can create the configuration files for the scheduler.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
{
|
||||||
|
kubectl config set-cluster kubernetes-the-hard-way \
|
||||||
|
--certificate-authority=ca.pem \
|
||||||
|
--embed-certs=true \
|
||||||
|
--server=https://127.0.0.1:6443 \
|
||||||
|
--kubeconfig=kube-scheduler.kubeconfig
|
||||||
|
|
||||||
|
kubectl config set-credentials system:kube-scheduler \
|
||||||
|
--client-certificate=kube-scheduler.pem \
|
||||||
|
--client-key=kube-scheduler-key.pem \
|
||||||
|
--embed-certs=true \
|
||||||
|
--kubeconfig=kube-scheduler.kubeconfig
|
||||||
|
|
||||||
|
kubectl config set-context default \
|
||||||
|
--cluster=kubernetes-the-hard-way \
|
||||||
|
--user=system:kube-scheduler \
|
||||||
|
--kubeconfig=kube-scheduler.kubeconfig
|
||||||
|
|
||||||
|
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
We created a kubernetes configuration file which tells the scheduler where the api server is located and which certificates to use when communicating with it.
|
||||||
|
|
||||||
|
Now, we can distribute the created configuration file.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
|
||||||
|
```
|
||||||
|
|
||||||
|
In addition to this file, we will create one more configuration file for the scheduler
|
||||||
|
|
||||||
|
```bash
|
||||||
|
{
|
||||||
|
sudo mkdir -p /etc/kubernetes/config
|
||||||
|
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
|
||||||
|
apiVersion: kubescheduler.config.k8s.io/v1beta1
|
||||||
|
kind: KubeSchedulerConfiguration
|
||||||
|
clientConnection:
|
||||||
|
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
|
||||||
|
leaderElection:
|
||||||
|
leaderElect: true
|
||||||
|
EOF
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
After all configuration files are created, we need to download the scheduler binary.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
wget -q --show-progress --https-only --timestamping \
|
||||||
|
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler"
|
||||||
|
```
|
||||||
|
|
||||||
|
And install it
|
||||||
|
|
||||||
|
```bash
|
||||||
|
{
|
||||||
|
chmod +x kube-scheduler
|
||||||
|
sudo mv kube-scheduler /usr/local/bin/
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Now, we can create the configuration file for the scheduler service
|
||||||
|
```bash
|
||||||
|
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
|
||||||
|
[Unit]
|
||||||
|
Description=Kubernetes Scheduler
|
||||||
|
Documentation=https://github.com/kubernetes/kubernetes
|
||||||
|
|
||||||
|
[Service]
|
||||||
|
ExecStart=/usr/local/bin/kube-scheduler \\
|
||||||
|
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
|
||||||
|
--v=2
|
||||||
|
Restart=on-failure
|
||||||
|
RestartSec=5
|
||||||
|
|
||||||
|
[Install]
|
||||||
|
WantedBy=multi-user.target
|
||||||
|
EOF
|
||||||
|
```
|
||||||
|
|
||||||
|
After the configuration file is created, we need to start the service
|
||||||
|
|
||||||
|
```bash
|
||||||
|
{
|
||||||
|
sudo systemctl daemon-reload
|
||||||
|
sudo systemctl enable kube-scheduler
|
||||||
|
sudo systemctl start kube-scheduler
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
And finally, we check the scheduler status
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sudo systemctl status kube-scheduler
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
● kube-scheduler.service - Kubernetes Scheduler
|
||||||
|
Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
|
||||||
|
Active: active (running) since Thu 2023-04-20 11:57:44 UTC; 16s ago
|
||||||
|
Docs: https://github.com/kubernetes/kubernetes
|
||||||
|
Main PID: 15134 (kube-scheduler)
|
||||||
|
Tasks: 7 (limit: 2275)
|
||||||
|
Memory: 13.7M
|
||||||
|
CGroup: /system.slice/kube-scheduler.service
|
||||||
|
└─15134 /usr/local/bin/kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml --v=2
|
||||||
|
...
|
||||||
|
```
|
||||||
|
|
||||||
|
## verification
|
||||||
|
|
||||||
|
Now, when our scheduler is up and running, we can check whether our pod is in running state.
|
||||||
|
```bash
|
||||||
|
kubectl get pod -o wide
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||||
|
hello-world 0/1 Pending 0 24m <none> <none> <none> <none>
|
||||||
|
```
|
||||||
|
|
||||||
|
As you can see, our pod is still in pending state.
|
||||||
|
|
||||||
|
To determine the reason for this, we will review the logs of our scheduler.
|
||||||
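The diff does not show the exact command used to read the logs; since the scheduler runs as a systemd service, one option (an assumption, not necessarily the author's command) is journalctl:

```bash
sudo journalctl -u kube-scheduler
```

Output: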
|
```
|
||||||
|
...
|
||||||
|
May 21 20:52:25 example-server kube-scheduler[91664]: I0521 20:52:25.471604 91664 factory.go:338] "Unable to schedule pod; no fit; waiting" pod="default/hello-world" err="0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
|
||||||
|
...
|
||||||
|
```
|
||||||
|
|
||||||
|
As we can see, our pod wasn't assigned to the node because the node has a taint; let's check our node's taints.
|
||||||
|
```bash
|
||||||
|
kubectl get nodes $(hostname -a) -o jsonpath='{.spec.taints}'
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
[{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"}]
|
||||||
|
```
|
||||||
|
|
||||||
|
As you can see, our node has a taint with the NoSchedule effect. Normally this taint is removed by the node lifecycle controller once the node is Ready, but that controller is part of the controller manager, which we have not configured yet.
|
||||||
|
But let's fix this by removing the taint manually.
|
||||||
|
```bash
|
||||||
|
kubectl taint nodes $(hostname -a) node.kubernetes.io/not-ready:NoSchedule-
|
||||||
|
```
|
||||||
|
|
||||||
|
And check our pod list again
|
||||||
|
```bash
|
||||||
|
kubectl get pod -o wide
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||||
|
hello-world 1/1 Running 0 29m 10.240.1.3 example-server <none> <none>
|
||||||
|
```
|
||||||
|
|
||||||
|
As you can see, our pod is in running state, which means that the scheduler works as expected.
|
||||||
|
|
||||||
|
Now we need to clean up our workspace
|
||||||
|
```bash
|
||||||
|
kubectl delete -f pod.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
Next: [Controller manager](./08-controller-manager.md)
|
|
@ -1,14 +1,14 @@
|
||||||
# Controller manager
|
# Controller manager
|
||||||
|
|
||||||

|
In this part, we will configure the controller manager.
|
||||||
|
|
||||||
to get the full flavor, let's start by stepping back from all the configuration work to the kubernetes abstractions
|

|
||||||
|
|
||||||
and our next abstraction is the deployment
|
>Controller Manager is a core component responsible for managing various controllers that regulate the desired state of the cluster. It runs as a separate process on the Kubernetes control plane and includes several built-in controllers
|
||||||
|
|
||||||
so let's create one
|
|
||||||
|
|
||||||
|
To see the controller manager in action, we will create a deployment before the controller manager is configured.
|
||||||
```bash
|
```bash
|
||||||
|
{
|
||||||
cat <<EOF> deployment.yaml
|
cat <<EOF> deployment.yaml
|
||||||
apiVersion: apps/v1
|
apiVersion: apps/v1
|
||||||
kind: Deployment
|
kind: Deployment
|
||||||
|
@ -32,24 +32,31 @@ spec:
|
||||||
EOF
|
EOF
|
||||||
|
|
||||||
kubectl apply -f deployment.yaml
|
kubectl apply -f deployment.yaml
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Check the created deployment status:
|
||||||
```bash
|
```bash
|
||||||
kubectl get deploy
|
kubectl get deploy
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
NAME READY UP-TO-DATE AVAILABLE AGE
|
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||||
nginx-deployment 0/1 0 0 24s
|
nginx-deployment 0/1 0 0 24s
|
||||||
```
|
```
|
||||||
|
|
||||||
so, something went wrong
|
As we can see, our deployment isn't in ready state.
|
||||||
for some reason our pods are not being created, though they should be
|
|
||||||
|
|
||||||
the controller manager is responsible for making sure pods get created, and we don't have one
|
As we already mentioned, in kubernetes the controller manager is responsible for ensuring that the desired state of the cluster equals the actual state. In our case it means that the deployment controller should create a replicaset, and the replicaset controller should create pods which will then be assigned to nodes by the scheduler. But as the controller manager is not configured, nothing happens with the created deployment.
|
||||||
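A quick way to confirm that the chain stalls at the very first step (a hedged check, not part of the original walkthrough) is to list replicasets; with no deployment controller running, none should exist:

```bash
kubectl get replicaset
```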
so let's get this problem solved
|
|
||||||
|
|
||||||
|
So, let's configure the controller manager.
|
||||||
|
|
||||||
|
## certificates
|
||||||
|
We will start with certificates.
|
||||||
|
|
||||||
|
As you remember, we configured our API server to use client certificates to authenticate users.
|
||||||
|
So, let's create a proper certificate for the controller manager
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
cat > kube-controller-manager-csr.json <<EOF
|
cat > kube-controller-manager-csr.json <<EOF
|
||||||
|
@ -77,10 +84,31 @@ cfssl gencert \
|
||||||
-config=ca-config.json \
|
-config=ca-config.json \
|
||||||
-profile=kubernetes \
|
-profile=kubernetes \
|
||||||
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
|
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
|
||||||
|
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Created certs:
|
||||||
|
```
|
||||||
|
kube-controller-manager.csr
|
||||||
|
kube-controller-manager-key.pem
|
||||||
|
kube-controller-manager.pem
|
||||||
|
```
|
||||||
|
|
||||||
|
The most interesting configuration options:
|
||||||
|
- cn (common name) - the value the api server will use as the client name during authorization
|
||||||
|
- o (organization) - the user group the api server will use during authorization
|
||||||
|
|
||||||
|
We specified "system:kube-controller-manager" in the organization. It says api server that the client who uses which certificate belongs to the system:kube-controller-manager group.
|
||||||
|
|
||||||
|
Now, we will distribute the CA private key: the controller manager uses it (together with the CA certificate) to sign certificates issued inside the cluster.
|
||||||
|
```bash
|
||||||
|
sudo cp ca-key.pem /var/lib/kubernetes/
|
||||||
|
```
|
||||||
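The kube-controller-manager.service unit is elided from this diff; in the standard kubernetes-the-hard-way setup (an assumption, not confirmed by this commit) the CA key and related files are wired in with flags like these:

```bash
# assumed excerpt of the elided kube-controller-manager ExecStart flags:
#   --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
#   --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
#   --root-ca-file=/var/lib/kubernetes/ca.pem \
#   --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem
```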
|
|
||||||
|
## configuration
|
||||||
|
|
||||||
|
After the certificate files are created, we can create the configuration files for the controller manager.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
kubectl config set-cluster kubernetes-the-hard-way \
|
kubectl config set-cluster kubernetes-the-hard-way \
|
||||||
|
@ -104,11 +132,20 @@ cfssl gencert \
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
We created a kubernetes configuration file which tells the controller manager where the api server is located and which certificates to use when communicating with it.
|
||||||
|
|
||||||
|
Now, we can distribute the created configuration file.
|
||||||
|
```bash
|
||||||
|
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
|
||||||
|
```
|
||||||
|
|
||||||
|
After all required configuration files are created, we need to download the controller manager binary.
|
||||||
```bash
|
```bash
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager"
|
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager"
|
||||||
```
|
```
|
||||||
|
|
||||||
|
And install it
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
chmod +x kube-controller-manager
|
chmod +x kube-controller-manager
|
||||||
|
@ -116,11 +153,7 @@ wget -q --show-progress --https-only --timestamping \
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
```bash
|
Now, we can create the configuration file for the controller manager service
|
||||||
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
|
|
||||||
sudo cp ca-key.pem /var/lib/kubernetes/
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
|
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
|
||||||
[Unit]
|
[Unit]
|
||||||
|
@ -149,6 +182,7 @@ WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
|
After the configuration file is created, we can start the controller manager
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
|
@ -157,10 +191,12 @@ EOF
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
And finally, we can check the controller manager status
|
||||||
```bash
|
```bash
|
||||||
sudo systemctl status kube-controller-manager
|
sudo systemctl status kube-controller-manager
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
● kube-controller-manager.service - Kubernetes Controller Manager
|
● kube-controller-manager.service - Kubernetes Controller Manager
|
||||||
Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
|
Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
|
||||||
|
@ -174,32 +210,31 @@ sudo systemctl status kube-controller-manager
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
well, the controller manager has started, as we can see - so maybe the pods got created too?
|
As you can see, our controller manager is up and running. So we can get back to our deployment.
|
||||||
let's check
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl get pod
|
kubectl get deploy
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
NAME READY STATUS RESTARTS AGE
|
NAME READY UP-TO-DATE AVAILABLE AGE
|
||||||
hello-world 1/1 Running 0 27m
|
deployment 1/1 1 1 2m8s
|
||||||
nginx-deployment-5d9cbcf759-x4pk8 0/1 Pending 0 79s
|
|
||||||
```
|
```
|
||||||
|
|
||||||
so, the pod has been created, but it is still in pending status - not that exciting
|
As you can see, our deployment is up and running; all desired pods are also in running state.
|
||||||
|
|
||||||
the answer to the question of why the pod is still not running is very simple
|
|
||||||
```bash
|
```bash
|
||||||
kubectl get pod -o wide
|
kubectl get pods
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
NAME READY STATUS RESTARTS AGE
|
||||||
hello-world 1/1 Running 0 31m 10.240.1.9 example-server <none> <none>
|
deployment-74fc7cdd68-89rqw 1/1 Running 0 67s
|
||||||
nginx-deployment-5d9cbcf759-x4pk8 0/1 Pending 0 5m22s <none> <none> <none> <none>
|
|
||||||
```
|
```
|
||||||
|
|
||||||
we can see that no one has assigned a node to it yet, and without a node the kubelet will not start the pod on its own
|
Now, when our controller manager is configured, let's clean up our workspace.
|
||||||
|
```bash
|
||||||
|
kubectl delete -f deployment.yaml
|
||||||
|
```
|
||||||
|
|
||||||
Next: [Scheduler](./08-scheduler.md)
|
Next: [Kube-proxy](./09-kubeproxy.md)
|
|
@ -1,151 +0,0 @@
|
||||||
# Scheduler
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
|
|
||||||
cat > kube-scheduler-csr.json <<EOF
|
|
||||||
{
|
|
||||||
"CN": "system:kube-scheduler",
|
|
||||||
"key": {
|
|
||||||
"algo": "rsa",
|
|
||||||
"size": 2048
|
|
||||||
},
|
|
||||||
"names": [
|
|
||||||
{
|
|
||||||
"C": "US",
|
|
||||||
"L": "Portland",
|
|
||||||
"O": "system:kube-scheduler",
|
|
||||||
"OU": "Kubernetes The Hard Way",
|
|
||||||
"ST": "Oregon"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
EOF
|
|
||||||
|
|
||||||
cfssl gencert \
|
|
||||||
-ca=ca.pem \
|
|
||||||
-ca-key=ca-key.pem \
|
|
||||||
-config=ca-config.json \
|
|
||||||
-profile=kubernetes \
|
|
||||||
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
|
|
||||||
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
kubectl config set-cluster kubernetes-the-hard-way \
|
|
||||||
--certificate-authority=ca.pem \
|
|
||||||
--embed-certs=true \
|
|
||||||
--server=https://127.0.0.1:6443 \
|
|
||||||
--kubeconfig=kube-scheduler.kubeconfig
|
|
||||||
|
|
||||||
kubectl config set-credentials system:kube-scheduler \
|
|
||||||
--client-certificate=kube-scheduler.pem \
|
|
||||||
--client-key=kube-scheduler-key.pem \
|
|
||||||
--embed-certs=true \
|
|
||||||
--kubeconfig=kube-scheduler.kubeconfig
|
|
||||||
|
|
||||||
kubectl config set-context default \
|
|
||||||
--cluster=kubernetes-the-hard-way \
|
|
||||||
--user=system:kube-scheduler \
|
|
||||||
--kubeconfig=kube-scheduler.kubeconfig
|
|
||||||
|
|
||||||
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
wget -q --show-progress --https-only --timestamping \
|
|
||||||
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler"
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
chmod +x kube-scheduler
|
|
||||||
sudo mv kube-scheduler /usr/local/bin/
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
```bash
|
|
||||||
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
|
|
||||||
apiVersion: kubescheduler.config.k8s.io/v1beta1
|
|
||||||
kind: KubeSchedulerConfiguration
|
|
||||||
clientConnection:
|
|
||||||
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
|
|
||||||
leaderElection:
|
|
||||||
leaderElect: true
|
|
||||||
EOF
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
|
|
||||||
[Unit]
|
|
||||||
Description=Kubernetes Scheduler
|
|
||||||
Documentation=https://github.com/kubernetes/kubernetes
|
|
||||||
|
|
||||||
[Service]
|
|
||||||
ExecStart=/usr/local/bin/kube-scheduler \\
|
|
||||||
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
|
|
||||||
--v=2
|
|
||||||
Restart=on-failure
|
|
||||||
RestartSec=5
|
|
||||||
|
|
||||||
[Install]
|
|
||||||
WantedBy=multi-user.target
|
|
||||||
EOF
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
sudo systemctl daemon-reload
|
|
||||||
sudo systemctl enable kube-scheduler
|
|
||||||
sudo systemctl start kube-scheduler
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
sudo systemctl status kube-scheduler
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
● kube-scheduler.service - Kubernetes Scheduler
|
|
||||||
Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
|
|
||||||
Active: active (running) since Thu 2023-04-20 11:57:44 UTC; 16s ago
|
|
||||||
Docs: https://github.com/kubernetes/kubernetes
|
|
||||||
Main PID: 15134 (kube-scheduler)
|
|
||||||
Tasks: 7 (limit: 2275)
|
|
||||||
Memory: 13.7M
|
|
||||||
CGroup: /system.slice/kube-scheduler.service
|
|
||||||
└─15134 /usr/local/bin/kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml --v=2
|
|
||||||
...
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
kubectl get pod -o wide
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
hello-world 1/1 Running 0 35m 10.240.1.9 example-server <none> <none>
|
|
||||||
nginx-deployment-5d9cbcf759-x4pk8 1/1 Running 0 9m34s 10.240.1.10 example-server <none> <none>
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
kubectl logs nginx-deployment-5d9cbcf759-x4pk8
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
Hello, World from deployment!
|
|
||||||
Hello, World from deployment!
|
|
||||||
Hello, World from deployment!
|
|
||||||
...
|
|
||||||
```
|
|
||||||
|
|
||||||
Next: [Kube proxy](./09-kubeproxy.md)
|
|
|
@ -1,11 +1,26 @@
|
||||||
# Kubeproxy
|
# Kube-proxy
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
So, let's start with an nginx deployment.
|
So, let's start with an nginx deployment.
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
{
|
||||||
cat <<EOF> nginx-deployment.yml
|
cat <<EOF> nginx-deployment.yml
|
||||||
|
apiVersion: v1
|
||||||
|
kind: ConfigMap
|
||||||
|
metadata:
|
||||||
|
name: nginx-conf
|
||||||
|
data:
|
||||||
|
default.conf: |
|
||||||
|
server {
|
||||||
|
listen 80;
|
||||||
|
server_name _;
|
||||||
|
location / {
|
||||||
|
return 200 "Hello from pod: \$hostname\n";
|
||||||
|
}
|
||||||
|
}
|
||||||
|
---
|
||||||
apiVersion: apps/v1
|
apiVersion: apps/v1
|
||||||
kind: Deployment
|
kind: Deployment
|
||||||
metadata:
|
metadata:
|
||||||
|
@ -25,25 +40,66 @@ spec:
|
||||||
image: nginx:1.21.3
|
image: nginx:1.21.3
|
||||||
ports:
|
ports:
|
||||||
- containerPort: 80
|
- containerPort: 80
|
||||||
|
volumeMounts:
|
||||||
|
- name: nginx-conf
|
||||||
|
mountPath: /etc/nginx/conf.d
|
||||||
|
volumes:
|
||||||
|
- name: nginx-conf
|
||||||
|
configMap:
|
||||||
|
name: nginx-conf
|
||||||
EOF
|
EOF
|
||||||
|
|
||||||
kubectl apply -f nginx-deployment.yml
|
kubectl apply -f nginx-deployment.yml
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl get pod -o wide
|
kubectl get pod -o wide
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||||
hello-world 1/1 Running 0 109m 10.240.1.9 example-server <none> <none>
|
nginx-deployment-db9778f94-2zv7x 1/1 Running 0 63s 10.240.1.12 example-server <none> <none>
|
||||||
nginx-deployment-5d9cbcf759-x4pk8 1/1 Running 0 84m 10.240.1.14 example-server <none> <none>
|
nginx-deployment-db9778f94-q5jx4 1/1 Running 0 63s 10.240.1.10 example-server <none> <none>
|
||||||
|
nginx-deployment-db9778f94-twx78 1/1 Running 0 63s 10.240.1.11 example-server <none> <none>
|
||||||
```
|
```
|
||||||
|
|
||||||
we need the IP address of a pod from the deployment, in my case 10.240.1.10
|
now, we will run a busybox container and try to access our pods from another container
|
||||||
запам'ятаємо її
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
{
|
||||||
|
cat <<EOF> pod.yaml
|
||||||
|
apiVersion: v1
|
||||||
|
kind: Pod
|
||||||
|
metadata:
|
||||||
|
name: busy-box
|
||||||
|
spec:
|
||||||
|
containers:
|
||||||
|
- name: busy-box
|
||||||
|
image: busybox
|
||||||
|
command: ['sh', '-c', 'while true; do echo "Busy"; sleep 1; done']
|
||||||
|
EOF
|
||||||
|
|
||||||
|
kubectl apply -f pod.yaml
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
and execute a command from our container
|
||||||
|
|
||||||
|
```bash
|
||||||
|
kubectl exec busy-box -- wget -O - $(kubectl get pod -o wide | grep nginx | awk '{print $6}' | head -n 1)
|
||||||
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
|
```
|
||||||
|
error: unable to upgrade connection: Forbidden (user=kubernetes, verb=create, resource=nodes, subresource=proxy)
|
||||||
|
```
|
||||||
|
|
||||||
|
the error occurred because the api server has no permission to execute commands in pods
|
||||||
|
|
||||||
|
```bash
|
||||||
|
{
|
||||||
cat <<EOF> rbac-create.yml
|
cat <<EOF> rbac-create.yml
|
||||||
kind: ClusterRole
|
kind: ClusterRole
|
||||||
apiVersion: rbac.authorization.k8s.io/v1
|
apiVersion: rbac.authorization.k8s.io/v1
|
||||||
|
@ -69,44 +125,27 @@ roleRef:
|
||||||
EOF
|
EOF
|
||||||
|
|
||||||
kubectl apply -f rbac-create.yml
|
kubectl apply -f rbac-create.yml
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
```
|
and execute a command from our container
|
||||||
kubectl exec hello-world -- wget -O - 10.240.1.14
|
|
||||||
|
```bash
|
||||||
|
kubectl exec busy-box -- wget -O - $(kubectl get pod -o wide | grep nginx | awk '{print $6}' | head -n 1)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
<!DOCTYPE html>
|
Hello from pod: nginx-deployment-68b9c94586-qkwjc
|
||||||
<html>
|
Connecting to 10.32.0.230 (10.32.0.230:80)
|
||||||
<head>
|
|
||||||
<title>Welcome to nginx!</title>
|
|
||||||
<style>
|
|
||||||
html { color-scheme: light dark; }
|
|
||||||
body { width: 35em; margin: 0 auto;
|
|
||||||
font-family: Tahoma, Verdana, Arial, sans-serif; }
|
|
||||||
</style>
|
|
||||||
</head>
|
|
||||||
<body>
|
|
||||||
<h1>Welcome to nginx!</h1>
|
|
||||||
<p>If you see this page, the nginx web server is successfully installed and
|
|
||||||
working. Further configuration is required.</p>
|
|
||||||
|
|
||||||
<p>For online documentation and support please refer to
|
|
||||||
<a href="http://nginx.org/">nginx.org</a>.<br/>
|
|
||||||
Commercial support is available at
|
|
||||||
<a href="http://nginx.com/">nginx.com</a>.</p>
|
|
||||||
|
|
||||||
<p><em>Thank you for using nginx.</em></p>
|
|
||||||
</body>
|
|
||||||
</html>
|
|
||||||
Connecting to 10.240.1.14 (10.240.1.14:80)
|
|
||||||
writing to stdout
|
writing to stdout
|
||||||
- 100% |********************************| 615 0:00:00 ETA
|
- 100% |********************************| 50 0:00:00 ETA
|
||||||
written to stdout
|
written to stdout
|
||||||
```
|
```
|
||||||
|
|
||||||
but that is not cool - I want to talk to the nginx deployment and have it just work
|
it is not very interesting to access pods by IP; we want some automatic load balancing
|
||||||
I know there are services - let's go through them
|
we know that services may help us with that
|
||||||
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -128,20 +167,30 @@ kubectl apply -f nginx-service.yml
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
get our service
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl get service
|
kubectl get service
|
||||||
```
|
```
|
||||||
|
|
||||||
so now we take the IP of that service (in my case 10.32.0.95)
|
and try to reach our containers via the service IP
|
||||||
and try to repeat the same thing
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl exec hello-world -- wget -O - 10.32.0.95
|
kubectl exec busy-box -- wget -O - $(kubectl get service -o wide | grep nginx | awk '{print $3}')
|
||||||
```
|
```
|
||||||
|
|
||||||
and nothing (here we could also talk about endpoints and the like, but that could simply take too long)
|
Output:
|
||||||
the main reason it doesn't work at this stage is that we haven't started one more important component
|
```
|
||||||
namely kube-proxy
|
Connecting to 10.32.0.230 (10.32.0.230:80)
|
||||||
|
```
|
||||||
|
|
||||||
|
hm, nothing happens; the reason is that our cluster does not know how to route traffic to the service IP
|
||||||
|
|
||||||
|
this is the responsibility of kube-proxy
|
||||||
|
|
||||||
|
it means that we need to configure kube-proxy
|
||||||
|
|
||||||
|
as usual, we will start with the certs
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -174,6 +223,7 @@ cfssl gencert \
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
now connection config
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
|
@ -198,16 +248,22 @@ cfssl gencert \
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
now, download kube-proxy
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
wget -q --show-progress --https-only --timestamping \
|
wget -q --show-progress --https-only --timestamping \
|
||||||
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy
|
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy
|
||||||
```
|
```
|
||||||
|
|
||||||
|
create proper folders
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo mkdir -p \
|
sudo mkdir -p \
|
||||||
/var/lib/kube-proxy
|
/var/lib/kube-proxy
|
||||||
```
|
```
|
||||||
|
|
||||||
|
install binaries
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
chmod +x kube-proxy
|
chmod +x kube-proxy
|
||||||
|
@ -215,10 +271,14 @@ sudo mkdir -p \
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
move connection config to proper folder
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
|
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
|
||||||
```
|
```
|
||||||
|
|
||||||
|
create kube-proxy config file
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
|
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
|
||||||
kind: KubeProxyConfiguration
|
kind: KubeProxyConfiguration
|
||||||
|
@ -230,6 +290,8 @@ clusterCIDR: "10.200.0.0/16"
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
|
create the kube-proxy service config file
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
|
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
|
||||||
[Unit]
|
[Unit]
|
||||||
|
@ -247,6 +309,8 @@ WantedBy=multi-user.target
|
||||||
EOF
|
EOF
|
||||||
```
|
```
|
||||||
|
|
||||||
|
start kube-proxy
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
{
|
{
|
||||||
sudo systemctl daemon-reload
|
sudo systemctl daemon-reload
|
||||||
|
@ -255,10 +319,13 @@ EOF
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
and check its status
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo systemctl status kube-proxy
|
sudo systemctl status kube-proxy
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Output:
|
||||||
```
|
```
|
||||||
● kube-proxy.service - Kubernetes Kube Proxy
|
● kube-proxy.service - Kubernetes Kube Proxy
|
||||||
Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
|
Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
|
||||||
|
@ -272,42 +339,22 @@ sudo systemctl status kube-proxy
|
||||||
...
|
...
|
||||||
```
|
```
|
||||||
|
|
||||||
well then, kube-proxy is installed - time to verify
|
and now we can check access to the service IP once again
|
||||||
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl exec hello-world -- wget -O - 10.32.0.95
|
kubectl exec busy-box -- wget -O - $(kubectl get service -o wide | grep nginx | awk '{print $3}')
|
||||||
```
|
```
|
||||||
|
|
||||||
```
|
```
|
||||||
<!DOCTYPE html>
|
Hello from pod: nginx-deployment-68b9c94586-qkwjc
|
||||||
<html>
|
Connecting to 10.32.0.230 (10.32.0.230:80)
|
||||||
<head>
|
|
||||||
<title>Welcome to nginx!</title>
|
|
||||||
<style>
|
|
||||||
html { color-scheme: light dark; }
|
|
||||||
body { width: 35em; margin: 0 auto;
|
|
||||||
font-family: Tahoma, Verdana, Arial, sans-serif; }
|
|
||||||
</style>
|
|
||||||
</head>
|
|
||||||
<body>
|
|
||||||
<h1>Welcome to nginx!</h1>
|
|
||||||
<p>If you see this page, the nginx web server is successfully installed and
|
|
||||||
working. Further configuration is required.</p>
|
|
||||||
|
|
||||||
<p>For online documentation and support please refer to
|
|
||||||
<a href="http://nginx.org/">nginx.org</a>.<br/>
|
|
||||||
Commercial support is available at
|
|
||||||
<a href="http://nginx.com/">nginx.com</a>.</p>
|
|
||||||
|
|
||||||
<p><em>Thank you for using nginx.</em></p>
|
|
||||||
</body>
|
|
||||||
</html>
|
|
||||||
Connecting to 10.32.0.95 (10.32.0.95:80)
|
|
||||||
writing to stdout
|
writing to stdout
|
||||||
- 100% |********************************| 615 0:00:00 ETA
|
- 100% |********************************| 50 0:00:00 ETA
|
||||||
written to stdout
|
written to stdout
|
||||||
```
|
```
|
||||||
wow, everything worked out
|
|
||||||
|
if you repeat the command several times, you will see that requests are handled by different pods (see the loop sketch below)
|
||||||
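For instance, a small loop like this (a sketch; -q just silences busybox wget's progress output):

```bash
for i in 1 2 3 4 5; do
  kubectl exec busy-box -- wget -qO - $(kubectl get service -o wide | grep nginx | awk '{print $3}')
done
```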
|
|
||||||
|
great, we successfully configured kube-proxy and can balance traffic between containers
|
||||||
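Under the hood, kube-proxy in its default iptables mode programs NAT rules for every service; assuming that mode is what the (partially elided) kube-proxy-config.yaml uses, you can peek at the generated rules:

```bash
sudo iptables -t nat -L KUBE-SERVICES -n | head
```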
|
|
||||||
Next: [DNS in Kubernetes](./10-dns.md)
|
Next: [DNS in Kubernetes](./10-dns.md)
|
|
@ -1,52 +1,36 @@
|
||||||
# dns
|
# DNS in Kubernetes
|
||||||
|
|
||||||
so, it is of course cool that we can use the IP, but I read that we can also address the service by name
|
Again, accessing the service by IP is all well and good, but we know that we can also access it by the service name.
|
||||||
|
Let's try:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl exec hello-world -- wget -O - nginx-service
|
kubectl exec busy-box -- wget -O - nginx-service
|
||||||
```
|
```
|
||||||
|
|
||||||
it doesn't really work, something went wrong
|
and nothing happens
|
||||||
|
|
||||||
that's because we didn't install the DNS addon
|
the reason is the DNS server, which we still have not configured
|
||||||
but no matter, we will fix it now
|
|
||||||
|
but we can install the DNS server into kubernetes directly
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
|
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
|
||||||
```
|
```
|
||||||
|
|
||||||
well, for me it didn't quite work
|
and try to repeat
|
||||||
we need to make changes to the kubelet
|
|
||||||
```bash
|
```bash
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
kubectl exec busy-box -- wget -O - nginx-service
|
||||||
[Unit]
|
|
||||||
Description=Kubernetes Kubelet
|
|
||||||
Documentation=https://github.com/kubernetes/kubernetes
|
|
||||||
After=containerd.service
|
|
||||||
Requires=containerd.service
|
|
||||||
|
|
||||||
[Service]
|
|
||||||
ExecStart=/usr/local/bin/kubelet \\
|
|
||||||
--config=/var/lib/kubelet/kubelet-config.yaml \\
|
|
||||||
--container-runtime=remote \\
|
|
||||||
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
|
|
||||||
--image-pull-progress-deadline=2m \\
|
|
||||||
--kubeconfig=/var/lib/kubelet/kubeconfig \\
|
|
||||||
--network-plugin=cni \\
|
|
||||||
--register-node=true \\
|
|
||||||
--v=2
|
|
||||||
Restart=on-failure
|
|
||||||
RestartSec=5
|
|
||||||
|
|
||||||
[Install]
|
|
||||||
WantedBy=multi-user.target
|
|
||||||
EOF
|
|
||||||
```
|
```
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
Output:
|
||||||
sudo systemctl daemon-reload
|
```
|
||||||
sudo systemctl enable kubelet
|
Hello from pod: nginx-deployment-68b9c94586-zh9vn
|
||||||
sudo systemctl restart kubelet
|
Connecting to nginx-service (10.32.0.230:80)
|
||||||
}
|
writing to stdout
|
||||||
```
|
- 100% |********************************| 50 0:00:00 ETA
|
||||||
|
written to stdout
|
||||||
|
```
|
||||||
|
|
||||||
|
Great, everything works as expected.
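As one more hedged check (not in the original walkthrough), you can resolve the service name directly from the busybox pod, which ships a small nslookup applet:

```bash
kubectl exec busy-box -- nslookup nginx-service
```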
|
|
@ -1,50 +0,0 @@
|
||||||
db changes after release:
|
|
||||||
- table: subscripsion
|
|
||||||
changes:
|
|
||||||
- EnableLoggingFunctionality remove
|
|
||||||
- SendLogNotifications remove
|
|
||||||
|
|
||||||
- table: FragmentSettings
|
|
||||||
changes: remove
|
|
||||||
- table: FragmentResults
|
|
||||||
changes: remove
|
|
||||||
- table: PrecalculatedFragmentResults
|
|
||||||
changes: remove
|
|
||||||
|
|
||||||
- table: Components
|
|
||||||
changes: remove
|
|
||||||
- table: ScoreProductResults
|
|
||||||
changes: remove
|
|
||||||
- table: PrecalculatedScoreResults
|
|
||||||
changes: remove
|
|
||||||
|
|
||||||
- table: DatasetInsights
|
|
||||||
changes: remove
|
|
||||||
- table: PrecalculatedDatasetInsights
|
|
||||||
changes: remove
|
|
||||||
|
|
||||||
- table: ScoringEngineVerifications
|
|
||||||
changes: remove
|
|
||||||
- table: ScoringEngineVerificationItems
|
|
||||||
changes: remove
|
|
||||||
- table: Profiles
|
|
||||||
changes: remove
|
|
||||||
- table: ProfileFields
|
|
||||||
changes: remove
|
|
||||||
|
|
||||||
- table: WebDatasetChunks
|
|
||||||
changes: removed
|
|
||||||
- table: WebEnvironments
|
|
||||||
changes: removed
|
|
||||||
- table: WebDatasets
|
|
||||||
changes: remove
|
|
||||||
|
|
||||||
- table: MobileDatasets
|
|
||||||
changes:
|
|
||||||
- FileSize remove
|
|
||||||
- SdkIdentifier remove
|
|
||||||
|
|
||||||
- table: Datasets
|
|
||||||
changes:
|
|
||||||
- IX_JsonId - remove index
|
|
||||||
- JsonId - remove column
|
|
|
@ -1,959 +0,0 @@
|
||||||
```
|
|
||||||
{
|
|
||||||
wget -q --show-progress --https-only --timestamping \
|
|
||||||
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
|
|
||||||
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
|
|
||||||
chmod +x cfssl cfssljson
|
|
||||||
sudo mv cfssl cfssljson /usr/local/bin/
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
|
|
||||||
cat > ca-config.json <<EOF
|
|
||||||
{
|
|
||||||
"signing": {
|
|
||||||
"default": {
|
|
||||||
"expiry": "8760h"
|
|
||||||
},
|
|
||||||
"profiles": {
|
|
||||||
"kubernetes": {
|
|
||||||
"usages": ["signing", "key encipherment", "server auth", "client auth"],
|
|
||||||
"expiry": "8760h"
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
EOF
|
|
||||||
|
|
||||||
cat > ca-csr.json <<EOF
|
|
||||||
{
|
|
||||||
"CN": "Kubernetes",
|
|
||||||
"key": {
|
|
||||||
"algo": "rsa",
|
|
||||||
"size": 2048
|
|
||||||
},
|
|
||||||
"names": [
|
|
||||||
{
|
|
||||||
"C": "US",
|
|
||||||
"L": "Portland",
|
|
||||||
"O": "Kubernetes",
|
|
||||||
"OU": "CA",
|
|
||||||
"ST": "Oregon"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
EOF
|
|
||||||
|
|
||||||
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
|
|
||||||
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
Result:
|
|
||||||
```
|
|
||||||
ca-key.pem
|
|
||||||
ca.csr
|
|
||||||
ca.pem
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
|
|
||||||
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
|
|
||||||
|
|
||||||
cat > kubernetes-csr.json <<EOF
|
|
||||||
{
|
|
||||||
"CN": "kubernetes",
|
|
||||||
"key": {
|
|
||||||
"algo": "rsa",
|
|
||||||
"size": 2048
|
|
||||||
},
|
|
||||||
"names": [
|
|
||||||
{
|
|
||||||
"C": "US",
|
|
||||||
"L": "Portland",
|
|
||||||
"O": "Kubernetes",
|
|
||||||
"OU": "Kubernetes The Hard Way",
|
|
||||||
"ST": "Oregon"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
EOF
|
|
||||||
|
|
||||||
cfssl gencert \
|
|
||||||
-ca=ca.pem \
|
|
||||||
-ca-key=ca-key.pem \
|
|
||||||
-config=ca-config.json \
|
|
||||||
-hostname=worker,127.0.0.1,${KUBERNETES_HOSTNAMES} \
|
|
||||||
-profile=kubernetes \
|
|
||||||
kubernetes-csr.json | cfssljson -bare kubernetes
|
|
||||||
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
Let's download etcd
|
|
||||||
```
|
|
||||||
wget -q --show-progress --https-only --timestamping \
|
|
||||||
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
|
|
||||||
```
|
|
||||||
|
|
||||||
Extract and place etcd into the /usr/local/bin/ directory
|
|
||||||
```
|
|
||||||
{
|
|
||||||
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
|
|
||||||
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
{
|
|
||||||
sudo mkdir -p /etc/etcd /var/lib/etcd
|
|
||||||
sudo chmod 700 /var/lib/etcd
|
|
||||||
sudo cp ca.pem \
|
|
||||||
kubernetes.pem kubernetes-key.pem \
|
|
||||||
/etc/etcd/
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
|
|
||||||
[Unit]
|
|
||||||
Description=etcd
|
|
||||||
Documentation=https://github.com/coreos
|
|
||||||
|
|
||||||
[Service]
|
|
||||||
Type=notify
|
|
||||||
ExecStart=/usr/local/bin/etcd \\
|
|
||||||
--name etcd \\
|
|
||||||
--cert-file=/etc/etcd/kubernetes.pem \\
|
|
||||||
--key-file=/etc/etcd/kubernetes-key.pem \\
|
|
||||||
--trusted-ca-file=/etc/etcd/ca.pem \\
|
|
||||||
--client-cert-auth \\
|
|
||||||
--listen-client-urls https://127.0.0.1:2379 \\
|
|
||||||
--advertise-client-urls https://127.0.0.1:2379 \\
|
|
||||||
--data-dir=/var/lib/etcd
|
|
||||||
Restart=on-failure
|
|
||||||
RestartSec=5
|
|
||||||
|
|
||||||
[Install]
|
|
||||||
WantedBy=multi-user.target
|
|
||||||
EOF
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
sudo systemctl daemon-reload
|
|
||||||
sudo systemctl enable etcd
|
|
||||||
sudo systemctl start etcd
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
sudo ETCDCTL_API=3 etcdctl member list \
|
|
||||||
--endpoints=https://127.0.0.1:2379 \
|
|
||||||
--cacert=/etc/etcd/ca.pem \
|
|
||||||
--cert=/etc/etcd/kubernetes.pem \
|
|
||||||
--key=/etc/etcd/kubernetes-key.pem
|
|
||||||
```
|
|
||||||
|
|
||||||
|
|
||||||
api server
|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
cat > service-account-csr.json <<EOF
|
|
||||||
{
|
|
||||||
"CN": "service-accounts",
|
|
||||||
"key": {
|
|
||||||
"algo": "rsa",
|
|
||||||
"size": 2048
|
|
||||||
},
|
|
||||||
"names": [
|
|
||||||
{
|
|
||||||
"C": "US",
|
|
||||||
"L": "Portland",
|
|
||||||
"O": "Kubernetes",
|
|
||||||
"OU": "Kubernetes The Hard Way",
|
|
||||||
"ST": "Oregon"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
EOF
|
|
||||||
|
|
||||||
cfssl gencert \
|
|
||||||
-ca=ca.pem \
|
|
||||||
-ca-key=ca-key.pem \
|
|
||||||
-config=ca-config.json \
|
|
||||||
-profile=kubernetes \
|
|
||||||
service-account-csr.json | cfssljson -bare service-account
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```bash
|
|
||||||
{
|
|
||||||
cat > admin-csr.json <<EOF
|
|
||||||
{
|
|
||||||
"CN": "admin",
|
|
||||||
"key": {
|
|
||||||
"algo": "rsa",
|
|
||||||
"size": 2048
|
|
||||||
},
|
|
||||||
"names": [
|
|
||||||
{
|
|
||||||
"C": "US",
|
|
||||||
"L": "Portland",
|
|
||||||
"O": "system:masters",
|
|
||||||
"OU": "Kubernetes The Hard Way",
|
|
||||||
"ST": "Oregon"
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
EOF
|
|
||||||
|
|
||||||
cfssl gencert \
|
|
||||||
-ca=ca.pem \
|
|
||||||
-ca-key=ca-key.pem \
|
|
||||||
-config=ca-config.json \
|
|
||||||
-profile=kubernetes \
|
|
||||||
admin-csr.json | cfssljson -bare admin
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
|
||||||
cat > encryption-config.yaml <<EOF
|
|
||||||
kind: EncryptionConfig
|
|
||||||
apiVersion: v1
|
|
||||||
resources:
|
|
||||||
- resources:
|
|
||||||
- secrets
|
|
||||||
providers:
|
|
||||||
- aescbc:
|
|
||||||
keys:
|
|
||||||
- name: key1
|
|
||||||
secret: ${ENCRYPTION_KEY}
|
|
||||||
- identity: {}
|
|
||||||
EOF
|
|
||||||
```

Create the directory for kubernetes configuration files:

```bash
sudo mkdir -p /etc/kubernetes/config
```

Download the kube-apiserver binary:

```bash
wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver"
```

Install it:

```bash
{
  chmod +x kube-apiserver
  sudo mv kube-apiserver /usr/local/bin/
}
```

Distribute the certificates and the encryption config:

```bash
{
  sudo mkdir -p /var/lib/kubernetes/

  sudo cp \
    ca.pem \
    kubernetes.pem kubernetes-key.pem \
    encryption-config.yaml \
    service-account-key.pem service-account.pem \
    /var/lib/kubernetes/
}
```

Create the kube-apiserver systemd unit:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address='91.107.220.4' \\
  --allow-privileged='true' \\
  --apiserver-count='3' \\
  --audit-log-maxage='30' \\
  --audit-log-maxbackup='3' \\
  --audit-log-maxsize='100' \\
  --audit-log-path='/var/log/audit.log' \\
  --authorization-mode='Node,RBAC' \\
  --bind-address='0.0.0.0' \\
  --client-ca-file='/var/lib/kubernetes/ca.pem' \\
  --enable-admission-plugins='NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota' \\
  --etcd-cafile='/var/lib/kubernetes/ca.pem' \\
  --etcd-certfile='/var/lib/kubernetes/kubernetes.pem' \\
  --etcd-keyfile='/var/lib/kubernetes/kubernetes-key.pem' \\
  --etcd-servers='https://127.0.0.1:2379' \\
  --event-ttl='1h' \\
  --encryption-provider-config='/var/lib/kubernetes/encryption-config.yaml' \\
  --kubelet-certificate-authority='/var/lib/kubernetes/ca.pem' \\
  --kubelet-client-certificate='/var/lib/kubernetes/kubernetes.pem' \\
  --kubelet-client-key='/var/lib/kubernetes/kubernetes-key.pem' \\
  --runtime-config='api/all=true' \\
  --service-account-key-file='/var/lib/kubernetes/service-account.pem' \\
  --service-cluster-ip-range='10.32.0.0/24' \\
  --service-node-port-range='30000-32767' \\
  --tls-cert-file='/var/lib/kubernetes/kubernetes.pem' \\
  --tls-private-key-file='/var/lib/kubernetes/kubernetes-key.pem' \\
  --service-account-signing-key-file='/var/lib/kubernetes/service-account-key.pem' \\
  --service-account-issuer='https://kubernetes.default.svc.cluster.local' \\
  --api-audiences='https://kubernetes.default.svc.cluster.local' \\
  --v='2'
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

Start it:

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable kube-apiserver
  sudo systemctl start kube-apiserver
}
```
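
Before moving on, it is worth checking that the api server actually came up. A quick sketch, assuming `kubernetes.pem` was issued with `127.0.0.1` in its hostname list (as in the certificates lab); `/healthz` is readable without authentication on a default-configured api server:

```bash
# the unit should be active, and /healthz should answer "ok"
sudo systemctl status kube-apiserver --no-pager
curl --cacert ca.pem https://127.0.0.1:6443/healthz
```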

Download and install kubectl so we can talk to the api server:

```bash
wget -q --show-progress --https-only --timestamping \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
  && chmod +x kubectl \
  && sudo mv kubectl /usr/local/bin/
```

Create a kubeconfig for the admin user:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig
}
```

Check that kubectl can reach the api server:

```bash
kubectl version --kubeconfig=admin.kubeconfig
```

Create a service account and a pod (pinned to the node with `nodeName`, since there is no scheduler yet):

```bash
{
cat <<EOF> pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  serviceAccountName: hello-world
  containers:
    - name: hello-world-container
      image: busybox
      command: ['sh', '-c', 'while true; do echo "Hello, World!"; sleep 1; done']
  nodeName: worker
EOF

cat <<EOF> sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hello-world
automountServiceAccountToken: false
EOF

kubectl apply -f sa.yaml --kubeconfig=admin.kubeconfig
kubectl apply -f pod.yaml --kubeconfig=admin.kubeconfig
}
```

# kubelet

Time to configure the worker node: the kubelet and its container runtime.

???? - yeah, we probably also need to issue the certificates for the public IP (i.e. add it to the `-hostname` list below).

Map the node name to localhost so the api server can reach the kubelet by node name:

```bash
echo "127.0.0.1 worker" | sudo tee -a /etc/hosts
```

Generate the kubelet client certificate:

```bash
{
cat > kubelet-csr.json <<EOF
{
  "CN": "system:node:worker",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1 \
  -profile=kubernetes \
  kubelet-csr.json | cfssljson -bare kubelet
}
```

Install the OS dependencies:

```bash
{
  sudo apt-get update
  sudo apt-get -y install socat conntrack ipset
}
```

Check whether swap is enabled (by default the kubelet refuses to start if swap is on):

```bash
sudo swapon --show
```

If it is, turn it off:

```bash
sudo swapoff -a
```
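
Note that `swapoff -a` only lasts until the next reboot. To make it permanent, the swap entry has to be removed from `/etc/fstab` as well; a sketch, assuming swap is configured there:

```bash
# comment out all swap entries so swap stays off after a reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```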

Download the worker binaries:

```bash
wget -q --show-progress --https-only --timestamping \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
  https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
```

Create the installation directories:

```bash
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
```

Install the binaries:

```bash
{
  mkdir containerd
  tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
  sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
  sudo mv runc.amd64 runc
  chmod +x kube-proxy kubelet runc
  sudo mv kube-proxy kubelet runc /usr/local/bin/
  sudo mv containerd/bin/* /bin/
}
```

Configure CNI networking - the bridge network for pods:

```bash
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cnio0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{"subnet": "10.240.1.0/24"}]
    ],
    "routes": [{"dst": "0.0.0.0/0"}]
  }
}
EOF
```

...and the loopback network:

```bash
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
  "cniVersion": "0.4.0",
  "name": "lo",
  "type": "loopback"
}
EOF
```

Configure containerd:

```bash
sudo mkdir -p /etc/containerd/
```

```bash
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
```

Create the containerd systemd unit:

```bash
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
```
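
The unit above is never started explicitly in this walkthrough; the kubelet unit below pulls it in via `Requires=containerd.service`, but it does not hurt to enable and start it right away:

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable containerd
  sudo systemctl start containerd
}
```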

Create a kubeconfig for the kubelet:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kubelet.kubeconfig

  kubectl config set-credentials system:node:worker \
    --client-certificate=kubelet.pem \
    --client-key=kubelet-key.pem \
    --embed-certs=true \
    --kubeconfig=kubelet.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:worker \
    --kubeconfig=kubelet.kubeconfig

  kubectl config use-context default --kubeconfig=kubelet.kubeconfig
}
```

Distribute the kubelet certificates and kubeconfig:

```bash
{
  sudo cp kubelet-key.pem kubelet.pem /var/lib/kubelet/
  sudo cp kubelet.kubeconfig /var/lib/kubelet/kubeconfig
  sudo cp ca.pem /var/lib/kubernetes/
}
```

Create the kubelet configuration file:

```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "10.240.1.0/24"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet-key.pem"
EOF
```

Create the kubelet systemd unit:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --hostname-override=worker \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

Start the kubelet:

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable kubelet
  sudo systemctl start kubelet
}
```
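
If the node does not show up in the next step, the kubelet logs are the first place to look:

```bash
# unit status plus the last 50 log lines
sudo systemctl status kubelet --no-pager
sudo journalctl -u kubelet -n 50 --no-pager
```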

The node should have registered itself:

```bash
kubectl get nodes --kubeconfig=admin.kubeconfig
```

...and the pod created earlier should be running now:

```bash
kubectl get pod --kubeconfig=admin.kubeconfig
```

Create an nginx pod to check pod networking:

```bash
cat <<EOF> nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  serviceAccountName: hello-world
  containers:
    - name: nginx-container
      image: nginx
      ports:
        - containerPort: 80
  nodeName: worker
EOF

kubectl apply -f nginx-pod.yaml --kubeconfig=admin.kubeconfig
```

Get its IP:

```bash
kubectl get pod nginx-pod --kubeconfig=admin.kubeconfig -o=jsonpath='{.status.podIP}'
```

...and make sure nginx answers:

```bash
curl $(kubectl get pod nginx-pod --kubeconfig=admin.kubeconfig -o=jsonpath='{.status.podIP}')
```

Clean up:

```bash
kubectl delete -f nginx-pod.yaml --kubeconfig=admin.kubeconfig
kubectl delete -f pod.yaml --kubeconfig=admin.kubeconfig
kubectl delete -f sa.yaml --kubeconfig=admin.kubeconfig
```

Now try the same with a Deployment instead of bare pods:

```bash
cat <<EOF> nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-container
          image: nginx
          ports:
            - containerPort: 80
EOF

kubectl apply -f nginx-deployment.yaml --kubeconfig=admin.kubeconfig
```

```bash
kubectl get pod --kubeconfig=admin.kubeconfig
```

```bash
kubectl get deployment --kubeconfig=admin.kubeconfig
```

Right, the deployment is there, but there are no pods - outrageous.
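
The reason: turning a Deployment into a ReplicaSet and pods is the job of the Deployment controller, which lives in kube-controller-manager - and we have not deployed that yet. You can see that nothing stands behind the deployment:

```bash
# with no controller manager there is no ReplicaSet and 0/3 ready replicas
kubectl get replicaset --kubeconfig=admin.kubeconfig
kubectl describe deployment nginx-deployment --kubeconfig=admin.kubeconfig
```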

# controller manager

Let's fix that by deploying the controller manager.

Generate its client certificate:

```bash
{
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
```

Create its kubeconfig:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```

Download and install the binary:

```bash
wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager"
```

```bash
{
  chmod +x kube-controller-manager
  sudo mv kube-controller-manager /usr/local/bin/
}
```

Distribute the kubeconfig and the CA key:

```bash
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
sudo cp ca-key.pem /var/lib/kubernetes/
```

Create the systemd unit:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

Start it:

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable kube-controller-manager
  sudo systemctl start kube-controller-manager
}
```
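
Once the controller manager is up, it should reconcile the Deployment we created earlier, so the ReplicaSet that was missing before should now appear:

```bash
# the Deployment controller should have created a ReplicaSet behind the deployment
kubectl get replicaset --kubeconfig=admin.kubeconfig
```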

```bash
kubectl get pod --kubeconfig=admin.kubeconfig
```

Right, we can see that our pods have been created - but they will not start, no matter what.
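
That is because binding a pod to a node is the scheduler's job, and we have not deployed one yet; the pods sit in `Pending` with no node assigned, which you can confirm with:

```bash
# NODE stays <none> and STATUS stays Pending until a scheduler assigns the pods
kubectl get pod -o wide --kubeconfig=admin.kubeconfig
```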

# kube scheduler

The last missing piece is the scheduler.

Generate its client certificate:

```bash
{

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
```

Create its kubeconfig:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
```

Download and install the binary:

```bash
wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler"
```

```bash
{
  chmod +x kube-scheduler
  sudo mv kube-scheduler /usr/local/bin/
}
```

```bash
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```

Create the scheduler config file:

```bash
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
```

Create the systemd unit:

```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

Start it:

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable kube-scheduler
  sudo systemctl start kube-scheduler
}
```

```bash
kubectl get pod --kubeconfig=admin.kubeconfig
```

Finally we can see our pods: they are running, and we can even check whether they actually work:

```bash
curl $(kubectl get pods -l app=nginx --kubeconfig=admin.kubeconfig -o=jsonpath='{.items[0].status.podIP}')
```

Nice - everything started and works.