Update spelling

pull/863/head
rsavchuk 2023-06-08 22:25:56 +02:00
parent 1b6b7be592
commit 0d2e3d93d1
9 changed files with 200 additions and 166 deletions


@ -28,9 +28,9 @@ After the download process is complete, we need to move runc binaries to proper
```bash
{
sudo mv runc.amd64 runc
chmod +x runc
sudo mv runc /usr/local/bin/
}
```
@ -38,14 +38,14 @@ Now, as we have runc configured, we can run busybox container
```bash
{
mkdir -p ~/busybox-container/rootfs/bin
cd ~/busybox-container/rootfs/bin
wget https://www.busybox.net/downloads/binaries/1.31.0-defconfig-multiarch-musl/busybox-x86_64
chmod +x busybox-x86_64
./busybox-x86_64 --install .
cd ~/busybox-container
runc spec
sed -i 's/"sh"/"echo","Hello from container runned by runc!"/' config.json
}
```
@ -63,8 +63,8 @@ Hello from container runned by runc!
Great, we created our first container in this tutorial. Now we will clean up our workspace.
```bash
{
cd ~
rm -r busybox-container
}
```
@ -88,9 +88,9 @@ After download process complete, we need to unzip and move containerd binaries t
```bash
{
mkdir containerd
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo mv containerd/bin/* /bin/
}
```
@ -233,8 +233,8 @@ ctr containers rm busybox-container
We can check that the lists of containers and tasks are empty
```bash
{
ctr task ls
ctr containers ls
}
```


@ -16,6 +16,8 @@ Previously we worked with containers, but on this step, we will work with other
As you remember, kubernetes usually starts pods. So now we will try to create one, but in a slightly unusual way: instead of using the kubernetes api (which we haven't configured yet), we will create pods with the usage of the kubelet only.
To do that we will use the [static pods](https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/) functionality.
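As a quick illustration only (the manifests we will actually use are created below; the name, image and command here are placeholders), a static pod is just an ordinary pod manifest placed in the kubelet's static pod directory:
```
apiVersion: v1
kind: Pod
metadata:
  name: static-example
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```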
## configure
So, let's begin.
First of all, we need to download kubelet.
@ -91,14 +93,16 @@ Output:
...
```
## verify
After the kubelet service is up and running, we can start creating our pods.
Before we create the static pod manifests, we need to create the folders where we will place them (the same as we configured in the kubelet)
```bash
{
mkdir /etc/kubernetes
mkdir /etc/kubernetes/manifests
}
```
@ -250,8 +254,8 @@ rm /etc/kubernetes/manifests/static-pod.yml
It takes some time to remove the pods; we can ensure that the pods are deleted by running
```bash
{
crictl pods
crictl ps
}
```


@ -4,7 +4,7 @@ Now, we know how kubelet runs containers and we know how to run pod without othe
Let's experiment with static pods a bit.
We will create a static pod, but this time we will run nginx instead of busybox
```bash
cat <<EOF> /etc/kubernetes/manifests/static-nginx.yml
apiVersion: v1
@ -21,7 +21,7 @@ spec:
EOF
```
After the manifest is created, we can check whether our nginx container is created
```bash
crictl pods
@ -33,8 +33,8 @@ POD ID CREATED STATE NAME
14662195d6829 About a minute ago Ready static-nginx-example-server default 0 (default)
```
As we can see, our nginx container is up and running.
Let's check whether it works as expected.
```bash
curl localhost
@ -67,7 +67,7 @@ Commercial support is available at
</html>
```
Now, let's try to create one more nginx container.
```bash
cat <<EOF> /etc/kubernetes/manifests/static-nginx-2.yml
apiVersion: v1
@ -84,7 +84,7 @@ spec:
EOF
```
Again, we will check if our pod is in a running state
```bash
crictl pods
@ -97,13 +97,13 @@ a299a86893e28 40 seconds ago Ready static-nginx-2-examp
14662195d6829 4 minutes ago Ready static-nginx-example-server default 0 (default)
```
Looks like our pod is up, but if we check the underlying containers we may be surprised.
```bash
crictl ps -a
```
Output:
```
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9e8cb98b87aed 6efc10a0510f1 42 seconds ago Exited nginx 3 b013eca0e9d33
@ -111,13 +111,13 @@ CONTAINER IMAGE CREATED STATE
```
As you can see, our second container is in the Exited state.
To check the reason for the exit state we can review the container logs
```bash
crictl logs $(crictl ps -q -s Exited)
```
Output:
```
...
2023/04/18 20:49:47 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
@ -125,11 +125,10 @@ nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
...
```
As we can see, the reason for the exit state is that the address is already in use.
The second nginx container tries to bind to the port that is already in use by the first nginx container.
We received this error because we run two nginx applications that use the same host network. That was done by specifying
```
...
spec:
@ -137,9 +136,9 @@ spec:
...
```
This option tells the kubelet that the containers should run on the host network without any network isolation (almost the same as running two nginx processes on the same host without containers).
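If you want to see this conflict from the host side, something like the following should show who holds port 80 (a minimal sketch; the exact output depends on your distribution and `ss` must be available):
```bash
# port 80 is already bound on the host by the first nginx container (hostNetwork)
sudo ss -tlnp | grep ':80'
```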
Now we will try to update our pod manifests to run the containers in separate network namespaces
```bash
{
cat <<EOF> /etc/kubernetes/manifests/static-nginx.yml
@ -170,9 +169,9 @@ EOF
}
```
As you can see, we simply removed the "hostNetwork: true" configuration option.
So, let's check what we have
```bash
crictl pods
```
@ -182,8 +181,8 @@ Output:
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
```
We see nothing.
To find out why no pods were created, let's review the kubelet logs
```bash
journalctl -u kubelet | grep NetworkNotReady
```
@ -195,25 +194,24 @@ May 03 13:43:43 example-server kubelet[23701]: I0503 13:43:43.862719 23701 eve
...
```
As we can see, the cni plugin is not initialized. But what is a cni plugin?
> CNI stands for Container Networking Interface. It is a standard for defining how network connectivity is established and managed between containers, as well as between containers and the host system in a container runtime environment. Kubernetes uses CNI plugins to implement networking for pods.
> A CNI plugin is a binary executable that is responsible for configuring the network interfaces and routes of a container or pod. It communicates with the container runtime (such as Docker or CRI-O) to set up networking for the container or pod.
As we can see, the kubelet can't configure the network for a pod by itself (or with the help of containerd). Same as with containers, to configure the network the kubelet uses a 'protocol' to communicate with 'someone' who can configure the network.
Now, we will configure the cni plugin.
First of all, we need to download that plugin
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
```
Now, we will create the proper folder structure
```bash
sudo mkdir -p \
/etc/cni/net.d \
@ -221,17 +219,16 @@ sudo mkdir -p \
```
here:
- net.d - folder where the plugin configuration files are stored
- bin - folder for plugin binaries
Now, we will untar the plugin to the proper folder
```bash
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
```
And create the plugin configuration
```bash
{
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
@ -262,11 +259,11 @@ EOF
}
```
Of course, all configuration options here are important, but I want to highlight 2 of them (illustrated right after this list):
- ranges - information about the subnets from which IP addresses will be assigned to pods
- routes - information on how to route traffic between nodes. As we have a single node kubernetes cluster, the configuration is very simple
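For reference, these two options typically live under the ipam section of a bridge plugin config and look roughly like this (a fragment only; the host-local IPAM type and the 10.240.1.0/24 pod subnet are the ones assumed in this tutorial):
```
"ipam": {
  "type": "host-local",
  "ranges": [
    [{"subnet": "10.240.1.0/24"}]
  ],
  "routes": [{"dst": "0.0.0.0/0"}]
}
```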
Update the kubelet config (add the network-plugin configuration option)
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
@ -293,7 +290,7 @@ WantedBy=multi-user.target
EOF
```
After the kubelet is reconfigured, we can restart it
```bash
{
sudo systemctl daemon-reload
@ -319,7 +316,7 @@ Output:
└─86730 /usr/local/bin/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --image-pull-progress-deadline=2m --file-che>
```
Now, after all fixes are applied and we have a working kubelet, we can check whether the pods are created
```bash
crictl pods
```
@ -332,7 +329,6 @@ b9c684fa20082 2 minutes ago Ready static-nginx-example
```
Pods are ok, but what about the containers?
```bash
crictl ps
```
@ -346,15 +342,15 @@ CONTAINER IMAGE CREATED STATE
They are also in a running state.
In this step, if we try to curl localhost, nothing will happen.
Our pods run in separate network namespaces, and each pod has its own IP address.
We need to find it.
```bash
{
PID=$(crictl pods --label app=static-nginx-2 -q)
CID=$(crictl ps -q --pod $PID)
crictl exec $CID ip a
}
```
@ -370,15 +366,15 @@ Output:
...
```
Remember that during the plugin configuration we set the pod subnet to 10.240.1.0/24. So, the container received its IP from the range specified; in my case, it was 10.240.1.1.
So, let's try to curl the container.
```bash
{
PID=$(crictl pods --label app=static-nginx-2 -q)
CID=$(crictl ps -q --pod $PID)
IP=$(crictl exec $CID ip a | grep 240 | awk '{print $2}' | cut -f1 -d'/')
curl $IP
}
```
@ -409,13 +405,12 @@ Commercial support is available at
</html>
```
As we can see, we successfully reached our container from the host.
But we remember that the cni plugin is also responsible for configuring communication between containers.
Let's check.
To do that we will run one more pod with busybox inside
```bash
cat <<EOF> /etc/kubernetes/manifests/static-pod.yml
apiVersion: v1
@ -433,7 +428,7 @@ spec:
EOF
```
Now, let's check and ensure that the pod is created
```bash
crictl pods
@ -447,16 +442,16 @@ a6881b7bba036 18 minutes ago Ready static-nginx-example
4dd70fb8f5f53 18 minutes ago Ready static-nginx-2-example-server default 0 (default)
```
As the pod is in a running state, we can check whether the other nginx pod is available
```bash
{
PID=$(crictl pods --label app=static-nginx-2 -q)
CID=$(crictl ps -q --pod $PID)
IP=$(crictl exec $CID ip a | grep 240 | awk '{print $2}' | cut -f1 -d'/')
PID_0=$(crictl pods --label app=static-pod -q)
CID_0=$(crictl ps -q --pod $PID_0)
crictl exec $CID_0 wget -O - $IP
}
```
@ -493,15 +488,14 @@ written to stdout
As we can see, we successfully reached our container from busybox.
In this section, we configured the cni plugin. Now we can run pods that can communicate with each other over the network.
Now we will clean up the workspace
```bash
rm /etc/kubernetes/manifests/static-*
```
And check if the pods are removed
```bash
crictl pods
```
@ -511,4 +505,6 @@ Output:
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
```
Note: it takes some time to remove all created resources.
Next: [ETCD](./04-etcd.md)


@ -1,36 +1,41 @@
# ETCD
At this point, we already know that we can run pods even without an API server. But the current approach is not very convenient: to create a pod we need to place a manifest in a specific folder, which is not very comfortable to manage. Now we will start configuring a "real" (more real than the current one, because the current one doesn't look like kubernetes at all) kubernetes cluster.
![image](./img/04_cluster_architecture_etcd.png "Kubelet")
For kubernetes (at least for the original one, if I can say so) we need to configure a database called [etcd](https://etcd.io/).
>etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node.
Our etcd will be configured as a single node database with authentication.
So, let's start.
## certificates
We will configure etcd to authenticate clients by the certificate file used during communication.
To do so, we need to generate some certs.
We will create the certificate files using the cfssl and cfssljson tools (which need to be installed before we start).
First of all, we will download the tools mentioned
```bash
wget -q --show-progress --https-only --timestamping \
https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssl_1.4.1_linux_amd64 \
https://github.com/cloudflare/cfssl/releases/download/v1.4.1/cfssljson_1.4.1_linux_amd64
```
And install them
```bash
{
mv cfssl_1.4.1_linux_amd64 cfssl
mv cfssljson_1.4.1_linux_amd64 cfssljson
chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
}
```
After the tools are installed successfully, we need to generate a ca certificate.
A ca (Certificate Authority) certificate, also known as a root certificate or a trusted root certificate, is a digital certificate that is used to verify the authenticity of other certificates.
```bash
@ -125,21 +130,7 @@ kubernetes-key.pem
kubernetes.pem
```
And distribute the certificate files created
```bash
{
sudo mkdir -p /etc/etcd /var/lib/etcd
@ -150,7 +141,23 @@ Now, we can start wioth the configurations of the etcd service. First of all, we
}
```
## configure
Now, we have all the required certificates, so, let's download etcd
```bash
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
```
After the download is complete, we can move etcd binaries to the proper folders
```bash
{
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
}
```
Now, we can configure etcd service
```bash
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
@ -186,12 +193,12 @@ Configuration options specified:
- advertise-client-urls - specifies the network addresses that the etcd server advertises to clients for connecting to the server
- data-dir - directory where etcd stores its data, including the key-value pairs in the etcd key-value store, snapshots, and transaction logs
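For orientation, in the unit file these two options appear as flags of the etcd binary, roughly like this (a fragment only; the full flag list is in the service file created above):
```
ExecStart=/usr/local/bin/etcd \
  ...
  --advertise-client-urls https://127.0.0.1:2379 \
  --data-dir=/var/lib/etcd
```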
And finally, we need to run our etcd service
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
```
@ -200,7 +207,7 @@ To ensure that our service successfully started, run
systemctl status etcd
```
Output:
```
● etcd.service - etcd
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
@ -214,7 +221,9 @@ The output should be similar to
...
```
## verify
When etcd is up and running, we can check whether we can communicate with it
```bash
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
@ -228,6 +237,6 @@ Output:
8e9e05c52164694d, started, etcd, http://localhost:2380, https://127.0.0.1:2379, false
```
As you can see, to communicate with the etcd service, we specified a cert and key file; this is the same file we used to configure etcd. It is only to simplify our deployment; in real life, we can use a different certificate signed by the same ca file.
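As a sketch of that last idea (not required for this tutorial), a dedicated client certificate signed by the same ca could be generated with the tools installed earlier; the CSR contents and the `kubernetes` profile name below are assumptions based on a typical cfssl ca config:
```bash
# hypothetical dedicated etcd client certificate, signed by the same ca
cat > etcd-client-csr.json <<EOF
{
  "CN": "etcd-client",
  "key": {"algo": "rsa", "size": 2048}
}
EOF
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  etcd-client-csr.json | cfssljson -bare etcd-client
```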
Next: [Api Server](./05-apiserver.md)


@ -1,16 +1,16 @@
# Api Server
In this section, we will configure the kubernetes API server.
> The Kubernetes API server validates and configures data for the api objects which include pods, services, replicationcontrollers, and others. The API Server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.
As you can see from the description, the api server is a central (not the main) component of the kubernetes cluster.
![image](./img/05_cluster_architecture_apiserver.png "Kubelet")
## certificates
Before we begin with the configuration of the api server, we need to create certificates for kubernetes that will be used to sign service account tokens.
```bash
{
cat > service-account-csr.json <<EOF
@ -41,27 +41,27 @@ cfssl gencert \
}
```
Now, we need to distribute certificates to the api server configuration folder
```bash
{
mkdir /var/lib/kubernetes/
sudo cp \
ca.pem \
kubernetes.pem kubernetes-key.pem \
service-account-key.pem service-account.pem \
/var/lib/kubernetes/
}
```
As you can see, in addition to the generated service-account certificate file, we also distributed the certificate generated in the [previous](./04-etcd.md) section. We will use that certificate for communication between
- api server and etcd
- clients and the api server (it will be used as the api server's serving certificate)
Also, we will use the ca file to validate the certificate files of the other components that communicate with the api server.
## data encryption
Also, we will configure the api server to encrypt sensitive data (secrets) before saving it to the etcd database. To do that we need to create the encryption config file.
```bash
{
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
@ -81,24 +81,24 @@ resources:
EOF
}
```
This config tells the api server to encrypt secrets before storing them in etcd (with the usage of the aescbc encryption provider).
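Optionally, once the api server and kubectl are configured later in this section, the effect can be verified by creating a secret and reading its raw value straight from etcd (a sketch; the certificate paths assume the files distributed to /etc/etcd in the previous section, and an aescbc-encrypted value starts with the `k8s:enc:aescbc:v1:` prefix):
```bash
kubectl create secret generic test-secret --from-literal=foo=bar
sudo ETCDCTL_API=3 etcdctl get /registry/secrets/default/test-secret \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem | hexdump -C | head
```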
## service configuration
Now, when all the required configuration/certificate files are created and distributed to the proper folders, we can download binaries and enable the api server as a service.
First of all, we need to download and install the api server binaries
```bash
{
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver"
chmod +x kube-apiserver
sudo mv kube-apiserver /usr/local/bin/
}
```
And create the service configuration file
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
@ -150,14 +150,13 @@ Configuration options I want to highlight:
Now, when the api-server service is configured, we can start it
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
}
```
And check the service status
```bash
sudo systemctl status kube-apiserver
```
@ -176,9 +175,8 @@ Output:
...
```
## verify
Now, when our server is up and running, we want to communicate with it. To do that we will use the kubectl tool. So let's download and install it
```bash
wget -q --show-progress --https-only --timestamping \
@ -187,8 +185,7 @@ wget -q --show-progress --https-only --timestamping \
&& sudo mv kubectl /usr/local/bin/
```
As the api server is configured in a more or less secure mode, we need to provide some credentials when accessing it. We will use certificate files as the credentials. That is why we need to generate a proper certificate file that will allow us to access the api server with administrator privileges.
```bash
{
cat > admin-csr.json <<EOF
@ -219,7 +216,7 @@ cfssl gencert \
}
```
Now, when our certificate file is generated, we can use it in kubectl. To do that we will update the default kubectl config file (actually we will create it) to use the proper certs and connection options.
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
@ -240,8 +237,7 @@ kubectl config use-context default
}
```
Now, we should be able to receive the cluster and kubectl info
```bash
kubectl version
```
@ -252,8 +248,8 @@ Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCom
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
As already mentioned, the api-server is the central kubernetes component that stores information about all kubernetes objects.
It means that we can create a pod even when the other components (kubelet, scheduler, controller manager) are not configured
```bash
{
HOST_NAME=$(hostname -a)
@ -285,7 +281,7 @@ kubectl apply -f pod.yaml
}
```
Note: as you can see, in addition to the pod, we create a service account associated with the pod. This step is needed because we have no default service account created in the default namespace (the service account controller is responsible for creating it, but we didn't configure the controller manager yet).
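For illustration only (the pod.yaml applied above already contains the equivalent; the names, image and command here are placeholders), the pair of objects looks roughly like this:
```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hello-world
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  serviceAccountName: hello-world
  containers:
  - name: hello-world
    image: busybox
    command: ["echo", "Hello, World!"]
```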
To check pod status run
```bash
@ -298,9 +294,9 @@ NAME READY STATUS RESTARTS AGE
hello-world 0/1 Pending 0 29s
```
As expected, we received the pod in a pending state, because we have no kubelet configured to run pods created in the API server.
We can verify that by running
```bash
kubectl get nodes
```


@ -98,7 +98,7 @@ And now, move all our configuration settings to the proper folders
sudo cp kubelet.kubeconfig /var/lib/kubelet/kubeconfig
```
Also, we need to create a KubeletConfiguration
```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
@ -293,6 +293,8 @@ Hello, World!
As you can see, we can create pods and the kubelet will run them.
Note: it takes some time to apply the created RBAC policies.
Now, we need to clean up our workspace.
```bash
kubectl delete -f pod.yaml
@ -308,5 +310,4 @@ Outpput:
No resources found in default namespace.
```
Next: [Scheduler](./07-scheduler.md)


@ -221,7 +221,13 @@ hello-world 0/1 Pending 0 24m <none> <none> <none>
As you can see, our pod is still in a pending state.
To find the reason for this, we will review the logs of our scheduler.
```bash
journalctl -u kube-scheduler | grep not-ready
```
Output:
```
...
May 21 20:52:25 example-server kube-scheduler[91664]: I0521 20:52:25.471604 91664 factory.go:338] "Unable to schedule pod; no fit; waiting" pod="default/hello-world" err="0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate."
...
@ -261,4 +267,14 @@ Now we need to clean-up our wirkspace
kubectl delete -f pod.yaml
```
Check if the pod is deleted
```bash
kubectl get pod
```
Output:
```
No resources found in default namespace.
```
Next: [Controller manager](./08-controller-manager.md)


@ -237,4 +237,14 @@ Now, when our controller manager configured, lets clean up our workspace.
kubectl delete -f deployment.yaml
```
Check if the pod created by the deployment is deleted
```bash
kubectl get pod
```
Output:
```
No resources found in default namespace.
```
Next: [Kube-proxy](./09-kubeproxy.md)


@ -142,6 +142,8 @@ writing to stdout
written to stdout
```
Note: it usually takes some time to apply all RBAC policies; until then you may still see permission errors.
As you can see, we successfully received the response from nginx. But to do that we used the IP address of the pod. To solve the service discovery issue, kubernetes has a special component - the service. Now we will create it.
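The actual service manifest is created a bit further on; as a rough sketch (the name and the label selector here are assumptions), a ClusterIP service that fronts the nginx pods could look like this:
```
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```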
@ -256,8 +258,8 @@ Now, we can distribute created configuration file.
```bash
{
sudo mkdir -p /var/lib/kube-proxy
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
}
```
@ -271,8 +273,8 @@ wget -q --show-progress --https-only --timestamping \
And install it
```bash
{
chmod +x kube-proxy
sudo mv kube-proxy /usr/local/bin/
}
```