# Apiserver - Kubelet integration

In this section, we will configure kubelet to run not only static pods but also pods created through the API server.

![image](./img/06_cluster_architecture_apiserver_kubelet.png "Kubelet")

## certificates

Again, we will start this part by creating the certificates that kubelet will use to communicate with the API server, and that the API server will use when communicating with kubelet.

```bash
{
HOST_NAME=$(hostname -a)
cat > kubelet-csr.json <<EOF
...
```

After our kubelet is in the running state, we can check whether it has registered with the API server:

```bash
kubectl get nodes
```

Output:
```
NAME             STATUS   ROLES   AGE    VERSION
example-server   Ready            1m2s   v1.21.0
```

Once kubelet has registered with the API server, we can check whether our pod is running:

```bash
kubectl get pod
```

Output:
```
NAME          READY   STATUS    RESTARTS   AGE
hello-world   1/1     Running   0          8m1s
```

As we can see, our pod is in the running state. In addition, we can double-check that the pod is really running by using crictl:

```bash
crictl pods
```

Output:
```
POD ID          CREATED         STATE   NAME          NAMESPACE   ATTEMPT   RUNTIME
1719d0202a5ef   8 minutes ago   Ready   hello-world   default     0         (default)
```

We can also view the logs from our pod:

```bash
crictl logs $(crictl ps -q)
```

Output:
```
Hello, World!
Hello, World!
Hello, World!
Hello, World!
...
```

Now, let's view the logs using kubectl instead of crictl. In our single-node case this may not seem very important, but in a cluster with more than one node it is: crictl can only read information about pods on the local node, while kubectl (by communicating with the API server) can read information from all nodes.

```bash
kubectl logs hello-world
```

Output:
```
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log hello-world)
```

As we can see, the API server has no permission to read logs from the node. This message appears because, during authorization, kubelet asks the API server whether the user named kubernetes has the proper permissions, and right now it does not. So let's fix this:

```bash
{
cat <<EOF > node-auth.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-proxy-access
rules:
- apiGroups: [""]
  resources: ["nodes/proxy"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-proxy-access-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-proxy-access
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
EOF
kubectl apply -f node-auth.yml
}
```

After our ClusterRole and ClusterRoleBinding are created, we can retry:

```bash
kubectl logs hello-world
```

Output:
```
Hello, World!
Hello, World!
Hello, World!
Hello, World!
...
```

As you can see, we can create pods through the API server and kubelet will run them. Note: it takes some time for newly created RBAC policies to be applied.

Now we need to clean up our workspace.

```bash
kubectl delete -f pod.yaml
```

Check that the pod was deleted:

```bash
kubectl get pod
```

Output:
```
No resources found in default namespace.
```
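The pod we just deleted was created from `pod.yaml` earlier in this guide. If you want to recreate it later for further experiments, a manifest along the following lines would reproduce the behaviour seen in the logs above. This is only a sketch: the `busybox` image and the echo loop are assumptions, not the original manifest.

```bash
{
cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello-world
    # Hypothetical image and command: any container that periodically
    # prints "Hello, World!" will match the log output shown above.
    image: busybox
    command: ["/bin/sh", "-c", "while true; do echo 'Hello, World!'; sleep 5; done"]
EOF
kubectl apply -f pod.yaml
}
```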
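One more optional check before moving on: the Forbidden error earlier showed that kubectl talks to the cluster as the user `kubernetes`, so we can ask the API server directly whether that user is now allowed to reach the `nodes/proxy` subresource. `kubectl auth can-i` is a standard kubectl subcommand and needs nothing beyond what we already configured.

```bash
# Should print "yes" once the ClusterRoleBinding from node-auth.yml is active.
kubectl auth can-i get nodes --subresource=proxy
```

If it prints `no`, wait a few seconds and retry; as noted above, freshly created RBAC policies take a moment to apply.

Next: [Scheduler](./07-scheduler.md)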