But wait, how do we reach it? With the first pod everything was simple: we just went to port 80, although it is also a question of what actually happened there.
Again, let's check whether our pods are in the running state.
```bash
crictl pods
```
Output:
```
POD ID              CREATED             STATE   NAME                            NAMESPACE   ATTEMPT   RUNTIME
a299a86893e28       40 seconds ago      Ready   static-nginx-2-example-server   default     0         (default)
14662195d6829       4 minutes ago       Ready   static-nginx-example-server     default     0         (default)
```
Looks like our pods are up, but if we try to check the underlying containers, we may be surprised.
It is not entirely clear yet what is going on, but we have a neat utility for debugging, so let's take a look at what containers exist:
```bash
crictl ps -a
```
Output:
```
CONTAINER           IMAGE               CREATED             STATE     NAME    ATTEMPT   POD ID
...
0e47618b39c09       6efc10a0510f1       4 minutes ago       Running   nginx   0         e8720dee2b08b
```
Well, what kind of nonsense is this, why is one container not running? Let's figure it out, and the first place to dig into is, of course, the logs.
As you can see, our second container is in the Exited state.
To find the reason for the Exited state, we can review the container logs:
```bash
crictl logs $(crictl ps -q -s Exited)
```
In the logs, you should see something like this:
```
...
2023/04/18 20:49:47 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
...
```
Well, that's trouble. Does this mean we can't run two nginx instances?
As we can see, the reason for the Exited state is that the address is already in use, and it is in use by our other container.
That's it, let's all go home, Docker is 100 times better )))
But no, it's not that simple.
The thing is, besides everything else, there is a standard that defines how networking should be set up for a container: CNI.
We haven't configured it in any way yet, and by rights none of these containers should have been created at all, if not for one cheat: we created all of them in the host network, just as if we were simply launching a program on the host. So we will need to set up a network plugin. But first, note that we received this error because we run both pods with the following configuration:
```
...
spec:
hostNetwork: true
...
```
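Since both pods run with `hostNetwork: true`, their nginx processes try to bind port 80 directly on the host, and only the first one can win. A quick way to see the collision on the node itself (assuming `ss` from iproute2 is available there):
```bash
# Show which process is already listening on port 80 on the host.
sudo ss -tlnp | grep ':80 '
```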
As we can see, our pods are running in the host network. Let's try to fix this by updating our manifests so that the containers no longer run in the host network.
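A minimal sketch of the change (the manifest path and pod name here are illustrative, adjust them to your static pod manifests): since `hostNetwork` defaults to `false`, it is enough to remove the field.
```yaml
# /etc/kubernetes/manifests/static-nginx.yaml (path is an assumption)
apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  # hostNetwork: true   <- removed; the default is false
  containers:
  - name: nginx
    image: nginx
```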
After the kubelet picks up the updated manifests, let's check the pods again with `crictl pods`:
```
POD ID              CREATED             STATE   NAME   NAMESPACE   ATTEMPT   RUNTIME
```
Very strange, we see nothing.
To find out why no pods were created, let's review the kubelet logs (but since we already know what we are looking for, we will cheat a bit):
```bash
journalctl -u kubelet | grep NetworkNotReady
```
Output:
```
...
May 03 13:43:43 example-server kubelet[23701]: I0503 13:43:43.862719 23701 event.go:291] "Event occurred" object="default/static-nginx-example-server" kind="Pod" apiVersion="v1" type="Warning" reason="NetworkNotReady" message="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
...
```
As we can see, the CNI plugin is not initialized. But what is a CNI plugin?
> CNI stands for Container Networking Interface. It is a standard for defining how network connectivity is established and managed between containers, as well as between containers and the host system in a container runtime environment. Kubernetes uses CNI plugins to implement networking for pods.
> A CNI plugin is a binary executable that is responsible for configuring the network interfaces and routes of a container or pod. It communicates with the container runtime (such as Docker or CRI-O) to set up networking for the container or pod.
As we can see, the kubelet can't configure the network for a pod by itself; just as with containers, it uses a 'protocol' to communicate with 'someone' who can actually configure the network.
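To make this less abstract: a CNI plugin is invoked as a plain executable, with its parameters passed in `CNI_*` environment variables and the network configuration as JSON on stdin. A minimal illustration of that 'protocol', assuming the reference plugins are installed under /opt/cni/bin:
```bash
# Ask the reference 'bridge' plugin which CNI spec versions it supports.
# Real ADD/DEL invocations also set CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, etc.
echo '{"cniVersion": "0.4.0"}' | CNI_COMMAND=VERSION /opt/cni/bin/bridge
```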
Now, let's configure the CNI plugin for our installation.
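As a preview of what that configuration can look like, here is a minimal sketch for the reference `bridge` plugin (the file name, bridge name, and subnet are illustrative, not the final values for our setup):
```bash
# Write a minimal bridge network config into the standard CNI config directory.
sudo mkdir -p /etc/cni/net.d
cat <<'EOF' | sudo tee /etc/cni/net.d/10-bridge.conf
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
EOF
```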