Update ETCD manual

pull/863/head
rsavchuk 2023-05-03 17:01:30 +02:00
parent e6c5f0c17a
commit 4d5041e29e
7 changed files with 52 additions and 21 deletions

View File

@ -198,6 +198,7 @@ May 03 13:43:43 example-server kubelet[23701]: I0503 13:43:43.862719 23701 eve
As we can see, the CNI plugin is not initialized. But what is a CNI plugin?
> CNI stands for Container Network Interface. It is a standard for defining how network connectivity is established and managed between containers, as well as between containers and the host system, in a container runtime environment. Kubernetes uses CNI plugins to implement networking for pods.
> A CNI plugin is a binary executable that is responsible for configuring the network interfaces and routes of a container or pod. It is invoked by the container runtime (such as Docker or CRI-O) to set up networking for the container or pod.
As we can see, the kubelet can't configure the network for a pod by itself; just as with containers, it uses a 'protocol' to communicate with 'someone' who can configure the network.
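That 'protocol' is CNI itself. As a rough sketch (my illustration, not a step in this guide), a container runtime calls a CNI plugin as a plain binary: the operation arrives via CNI_* environment variables and the network config via stdin. The bridge/host-local plugins, paths, and subnet below are assumptions for the example:
```bash
# Sketch: invoking a CNI plugin by hand, the way a runtime would (per the CNI spec).
# Assumes the reference "bridge" and "host-local" plugins are installed in /opt/cni/bin
# and that the network namespace /var/run/netns/example exists. Run as root.
CNI_COMMAND=ADD \
CNI_CONTAINERID=example-container \
CNI_NETNS=/var/run/netns/example \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/bridge <<EOF
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": { "type": "host-local", "subnet": "10.240.0.0/24" }
}
EOF
```
On success the plugin prints a JSON result describing the interface it created and the IP it assigned.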
@ -508,4 +509,4 @@ Output:
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
```
Next: [ETCD](./04-etcd.md)

View File

@ -1,13 +1,13 @@
# ETCD
At this point we already know that we can run pods even without an API server. But the current approach is not very comfortable to use: to create a pod we need to place a manifest in a particular folder, which is hard to manage. Now we will start our journey of configuring a "real" kubernetes. And of course all our manifests should be stored somewhere.
![image](./img/04_cluster_architecture_etcd.png "ETCD")
For kubernetes (at least for the original one, if I can say so) we need to configure a database called ETCD.
To configure the db (and other kubernetes components in the future) we will need some tools to generate certificates.
```bash
{
wget -q --show-progress --https-only --timestamping \
@ -18,7 +18,9 @@
}
```
And now let's begin our etcd configuration journey.
First of all we will generate the CA certificate that will be used to sign all other certificates.
```bash
{
@ -61,14 +63,15 @@ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
```
Generated files:
```
ca-key.pem
ca.csr
ca.pem
```
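If you want to sanity-check the result (an optional step, my addition), openssl can print the CA certificate's subject, issuer, and validity period:
```bash
# The subject/issuer should match what we put into ca-csr.json
openssl x509 -in ca.pem -noout -subject -issuer -dates
```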
Now we need to generate the certificate that will be used by ETCD itself (to be precise, not only by ETCD, but we will find out about that in the next parts) as its server certificate.
```bash
{
HOST_NAME=$(hostname -a)
@ -103,22 +106,31 @@ cfssl gencert \
}
```
Generated files:
```
kubernetes.csr
kubernetes-key.pem
kubernetes.pem
```
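An optional check (again my addition): the server certificate must cover every name and IP clients will use to reach etcd, so it is worth inspecting the Subject Alternative Name extension:
```bash
# Expect to see 127.0.0.1 and the host name gathered above among the SANs
openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"
```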
Now that we have all the required certs, we need to download etcd.
```bash
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
```
Decompress and install it to the proper folder (/usr/local/bin/)
```bash
{
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
}
```
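To confirm the binaries landed in the right place (an optional check), both tools can report their versions:
```bash
etcd --version
ETCDCTL_API=3 etcdctl version
```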
Once etcd is installed, we need to move our generated certificates to the proper folder
```bash
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
@ -128,7 +140,9 @@ wget -q --show-progress --https-only --timestamping \
}
```
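After the copy, /etc/etcd should contain the CA and the server key pair used by the service file below (the exact listing is my assumption):
```bash
ls -l /etc/etcd
# expected: ca.pem  kubernetes-key.pem  kubernetes.pem
```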
Create the etcd systemd service configuration file
```bash
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
@ -137,11 +151,11 @@ Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name etcd \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--client-cert-auth \\
--listen-client-urls https://127.0.0.1:2379 \\
--advertise-client-urls https://127.0.0.1:2379 \\
--data-dir=/var/lib/etcd
@ -153,6 +167,18 @@ WantedBy=multi-user.target
EOF
```
Configuration options specified:
- client-cert-auth - this configuration option tells etcd to enable the authentication of clients using SSL/TLS client certificates. When client-cert-auth is enabled, etcd requires that clients authenticate themselves by presenting a valid SSL/TLS client certificate during the TLS handshake. This certificate must be signed by a trusted certificate authority (CA) and include the client's identity information (see the curl sketch after this list)
- name - used to specify the unique name of an etcd member
- cert-file - path to the SSL/TLS certificate file that the etcd server presents to clients during the TLS handshake
- key-file - path to the SSL/TLS private key file that corresponds to the SSL/TLS certificate presented by the etcd server during the TLS handshake
- trusted-ca-file - path to the CA file which etcd will use to validate client certificates
- listen-client-urls - specifies the network addresses on which the etcd server listens for client requests
- advertise-client-urls - specifies the network addresses that the etcd server advertises to clients for connecting to the server
- data-dir - directory where etcd stores its data, including the key-value pairs in the etcd key-value store, snapshots, and transaction logs
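To see what client-cert-auth means in practice, here is a quick sketch (my addition; run it once etcd is started below). A request without a client certificate is rejected during the TLS handshake, while one presenting a certificate signed by our CA succeeds:
```bash
# Rejected: no client certificate is presented
sudo curl --cacert /etc/etcd/ca.pem https://127.0.0.1:2379/health

# Accepted: authenticated with a certificate signed by the trusted CA
sudo curl --cacert /etc/etcd/ca.pem \
  --cert /etc/etcd/kubernetes.pem \
  --key /etc/etcd/kubernetes-key.pem \
  https://127.0.0.1:2379/health
```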
And finally, we need to start our etcd service
```bash
{
sudo systemctl daemon-reload
@ -161,10 +187,12 @@ EOF
}
```
And ensure that etcd is up and running
```bash
systemctl status etcd
```
Output:
```
● etcd.service - etcd
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
@ -178,6 +206,7 @@ systemctl status etcd
...
```
When etcd is up and running, we can check whether we can connect to it.
```bash
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
@ -186,9 +215,10 @@ sudo ETCDCTL_API=3 etcdctl member list \
--key=/etc/etcd/kubernetes-key.pem
```
Output:
```
8e9e05c52164694d, started, etcd, http://localhost:2380, https://127.0.0.1:2379, false
```
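As a final smoke test (my addition, reusing the same TLS flags), we can write a key and read it back:
```bash
{
  sudo ETCDCTL_API=3 etcdctl put hello world \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.pem \
    --cert=/etc/etcd/kubernetes.pem \
    --key=/etc/etcd/kubernetes-key.pem

  sudo ETCDCTL_API=3 etcdctl get hello \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.pem \
    --cert=/etc/etcd/kubernetes.pem \
    --key=/etc/etcd/kubernetes-key.pem
}
```
The get should print the key and the value (hello, world) on separate lines.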
Next: [Api Server](./05-apiserver.md)

View File

@ -267,4 +267,4 @@ hello-world 0/1 Pending 0 29s
In reality, although we do have a kubelet, it knows nothing about the server, and the server knows nothing about it.
We need to solve this little problem.
Next: [Apiserver - Kubelet integration](./06-apiserver-kubelet.md)

View File

@ -243,4 +243,4 @@ Hello, World!
Wow, now everything definitely works and we can use kubernetes. But wait, we still have components that are gray on our diagram. Let's figure them out.
Next: [Controller manager](./07-controller-manager.md)

View File

@ -202,4 +202,4 @@ nginx-deployment-5d9cbcf759-x4pk8 0/1 Pending 0 5m22s <none>
We can see that nobody has assigned a node to it yet, and without a node the kubelet won't start the pod on its own.
Next: [Scheduler](./08-scheduler.md)

View File

@ -148,4 +148,4 @@ Hello, World from deployment!
...
```
Next: [Kube proxy](./09-kubeproxy.md)

View File

@ -310,4 +310,4 @@ written to stdout
```
Wow, everything worked out!
Next: [DNS in Kubernetes](./10-dns.md)