finalize docker support

This commit is contained in:
Ruslan Savchuk
2025-04-16 22:07:20 +02:00
parent 39a5cc646d
commit 84a7bd6f1a
16 changed files with 89 additions and 1148 deletions

docs/00-docker.md Normal file

@@ -0,0 +1,52 @@
# Build container image
Create a Dockerfile for the container
```bash
cat <<EOF | tee Dockerfile
FROM ubuntu:22.04
RUN apt update \
&& apt install -y wget systemd kmod systemd-sysv vim less iptables \
&& rm -rf /var/lib/apt/lists/*
RUN systemctl set-default multi-user.target
RUN find /etc/systemd/system /lib/systemd/system \
-path '*.wants/*' -not -name '*systemd*' -exec rm -f {} \;
RUN mkdir /workdir
WORKDIR /workdir
ENTRYPOINT ["/lib/systemd/systemd", "--system"]
EOF
```
Build the container image
```bash
docker build -t ubuntu-systemd .
```
Run the created container image
```bash
docker run -d \
--name ubuntu-systemd-container \
--privileged \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
-v /sys/fs/cgroup:/sys/fs/cgroup:rw \
--tmpfs /tmp \
--tmpfs /run \
--tmpfs /run/lock \
ubuntu-systemd
```
Now let's run bash inside the container
```bash
docker exec -it ubuntu-systemd-container bash
```
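To confirm that systemd really did boot as PID 1 inside the container, a couple of quick checks can be run (a sketch; `is-system-running` may report `degraded` instead of `running` if an optional unit failed, which is still fine for our purposes):
```bash
# PID 1 inside the container should be systemd itself
docker exec ubuntu-systemd-container cat /proc/1/comm

# overall state reported by systemd; "running" (or "degraded") means it booted
docker exec ubuntu-systemd-container systemctl is-system-running
```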
Next: [Kubernetes architecture](./00-kubernetes-architecture.md)


@@ -42,7 +42,7 @@ mkdir -p busybox-container/rootfs/bin \
&& ./busybox-x86_64 --install . \
&& cd ./../.. \
&& runc spec \
&& sed -i 's/"sh"/"echo","Hello from container run by runc!","sleep","3600"/' config.json
&& sed -i 's/"sh"/"echo","Hello from container run by runc!"/' config.json
```
In this step, we downloaded the busybox image, unarchived it, and created the files required by runc to run the container (the container configuration and the files that will be accessible from inside the container). So, let's run our container
@@ -190,9 +190,7 @@ docker.io/library/busybox:latest application/vnd.docker.distribution.manifest.li
Now, let's start our container
```bash
ctr run --rm --snapshotter native docker.io/library/busybox:latest busybox-container sh -c 'echo "Hello"'
ctr run --detach --runtime io.containerd.runc.v2 --snapshotter native docker.io/library/busybox:latest busybox-container sh -c 'sleep 3600'
ctr run --detach docker.io/library/busybox:latest busybox-container sh -c 'echo "Hello from container run by containerd!"'
ctr run --detach --snapshotter native docker.io/library/busybox:latest busybox-container sh -c 'while sleep 1; do echo "Hi"; done'
```
Output:
@@ -219,12 +217,19 @@ ctr task ls
Output:
```
TASK PID STATUS
busybox-container 2862580 STOPPED
busybox-container 2862580 RUNNING
```
As we can see, our container is in the stopped state (the command was successfully executed, so the container stopped).
Now, let's clean up our workspace and go to the next section.
Stop the running task
```bash
kill -9 $(ctr task ls | grep busybox | awk '{print $2}')
```
And remove the created container
```bash
ctr containers rm busybox-container
```
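To double-check that the cleanup worked, we can list what containerd still knows about; neither list should mention busybox-container any more:
```bash
# containers known to containerd
ctr containers ls

# running tasks
ctr task ls
```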


@@ -24,22 +24,16 @@ First of all, we need to download kubelet.
```bash
wget -q --show-progress --https-only --timestamping \
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubelet
tar -xvzf kubernetes-node-linux-amd64.tar.gz
```
After the download completes, move the kubelet binary to the proper folder
```bash
# chmod +x kubelet \
# && mv kubelet /usr/local/bin/
chmod +x kubelet \
&& mv kubelet /usr/local/bin/
```
Ensure swap is disabled
```bash
chmod +x kubernetes/node/bin/kubelet \
&& mv kubernetes/node/bin/kubelet /usr/local/bin/
```
```bash
# ensure swap is disabled
swapoff -a
```
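To verify that no swap is active, `swapon --show` should print nothing, and `/proc/swaps` should contain only its header line:
```bash
# prints nothing when no swap devices are active
swapon --show

# only the "Filename Type Size Used Priority" header should remain
cat /proc/swaps
```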


@@ -37,7 +37,7 @@ As we can see our nginx container is up and running.
Let's check whether it works as expected.
```bash
curl localhost
wget -O- localhost
```
Output:
@@ -178,7 +178,9 @@ crictl pods
Output:
```
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
dd37d609e012d About a minute ago NotReady static-nginx-2-b66c13e037b3 default 0 (default)
42c3883717b2d About a minute ago NotReady static-nginx-b66c13e037b3 default 0 (default)
```
We see nothing.
@@ -390,7 +392,7 @@ So, let's try to curl the container.
PID=$(crictl pods --label app=static-nginx-2 -q)
CID=$(crictl ps -q --pod $PID)
IP=$(crictl exec $CID ip a | grep 240 | awk '{print $2}' | cut -f1 -d'/')
curl $IP
wget -O- $IP
}
```
@@ -496,10 +498,6 @@ Commercial support is available at
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Connecting to 10.240.1.4 (10.240.1.4:80)
writing to stdout
- 100% |********************************| 615 0:00:00 ETA
written to stdout
```
As we can see we successfully reached our container from busybox.


@@ -214,7 +214,7 @@ Output:
## Verify
When etcd is up and running, we can check whether we can communicate with it
```
```bash
ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \


@@ -231,47 +231,6 @@ But now, lets view logs using kubectl instead of crictl. In our case it is maybe
kubectl logs hello-world
```
Output:
```
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log hello-world)
```
As we can see, the API server has no permission to read logs from the node. This message appears because, during authorization, the kubelet asks the API server whether the user named kubernetes has the proper permissions, which is not yet the case. So let's fix this
```bash
{
cat <<EOF > node-auth.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-proxy-access
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-proxy-access-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-proxy-access
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f node-auth.yml
}
```
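Instead of retrying immediately, we can ask the API server whether the kubernetes user is now allowed to reach the node proxy subresource. This sketch uses kubectl impersonation, so it assumes our current credentials are permitted to impersonate users; expect `yes` once the binding has propagated:
```bash
# check the "get" verb on the nodes/proxy subresource as user "kubernetes"
kubectl auth can-i get nodes/proxy --as=kubernetes
```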
After the cluster role and role binding are created, we can retry
```bash
kubectl logs hello-world
```
Output:
```
Hello, World!
@@ -283,8 +242,6 @@ Hello, World!
As you can see, we can create pods and the kubelet will run them.
Note: it takes some time to apply created RBAC policies.
Now, we need to clean up our workspace.
```bash
kubectl delete -f pod.yaml


@@ -92,47 +92,6 @@ And execute command from our container
kubectl exec busy-box -- wget -O - $(kubectl get pod -o wide | grep nginx | awk '{print $6}' | head -n 1)
```
Output:
```
error: unable to upgrade connection: Forbidden (user=kubernetes, verb=create, resource=nodes, subresource=proxy)
```
This error occurred because the API server has no permission to execute commands in pods. We will fix this issue by creating a cluster role and binding it to the kubernetes user.
```bash
{
cat <<EOF | tee rbac-create.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-user-clusterrole
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy"]
    verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-user-clusterrolebinding
subjects:
  - kind: User
    name: kubernetes
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kubernetes-user-clusterrole
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f rbac-create.yml
}
```
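We can also confirm the new permission directly with kubectl impersonation (a sketch; it assumes our current credentials may impersonate the kubernetes user, and it should print `yes` once the RBAC objects have propagated):
```bash
# check the "create" verb on the nodes/proxy subresource as user "kubernetes"
kubectl auth can-i create nodes/proxy --as=kubernetes
```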
Now, we can execute the command
```bash
kubectl exec busy-box -- wget -O - $(kubectl get pod -o wide | grep nginx | awk '{print $6}' | head -n 1)
```
Output:
```
Hello from pod: nginx-deployment-68b9c94586-qkwjc
@@ -347,5 +306,3 @@ written to stdout
```
If you try to repeat the command once again you will see that requests are handled by different pods.
Next: [DNS in Kubernetes](./10-dns.md)


@@ -1,46 +0,0 @@
# DNS in Kubernetes
As we saw in the previous section, Kubernetes has a special component to solve service discovery issues. But we solved them only partially.
In this section we will work through the remaining part of service discovery.
If you remember, in the previous section we accessed the service by its IP address. That solves the problem only partially, as we still need to know the service IP address. To solve the second part, we will configure a DNS server in Kubernetes.
> In Kubernetes, DNS (Domain Name System) is a crucial component that enables service discovery and communication between various resources within a cluster. DNS allows you to refer to services, pods, and other Kubernetes objects by their domain names instead of IP addresses, making it easier to manage and communicate between them.
Before we configure it, let's check whether we can access our service (created in the previous section) by its name.
```bash
kubectl exec busy-box -- wget -O - nginx-service.default.svc.cluster.local.
```
And nothing happens. The reason for this behaviour is that the pod can't resolve the requested domain name, because no DNS server is configured in our cluster.
Also worth mentioning: Kubernetes automatically configures each pod to use the "special" DNS server of our cluster; its address was set when we configured the kubelet:
```
...
clusterDNS:
- "10.32.0.10"
...
```
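The kubelet renders this address into each pod's `/etc/resolv.conf`, so we can inspect it from inside a pod (a sketch, assuming the busy-box pod from the previous section is still running):
```bash
# full resolver configuration generated for the pod
kubectl exec busy-box -- cat /etc/resolv.conf

# extract just the nameserver address; it should match clusterDNS (10.32.0.10)
kubectl exec busy-box -- cat /etc/resolv.conf | awk '/^nameserver/ {print $2}'
```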
We will set up DNS using CoreDNS and install it into our Kubernetes cluster
```bash
kubectl apply -f https://raw.githubusercontent.com/ruslansavchuk/kubernetes-the-hard-way/master/manifests/coredns.yml -n kube-system
```
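Before repeating the call, it is worth checking that the CoreDNS pods are actually up; the manifest installs them into the kube-system namespace:
```bash
# the CoreDNS pods should appear here in the Running state
kubectl get pods -n kube-system
```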
After our DNS server is up and running, we can repeat the call once again
```bash
kubectl exec busy-box -- wget -O - nginx-service.default.svc.cluster.local.
```
Output:
```
Hello from pod: nginx-deployment-68b9c94586-zh9vn
Connecting to nginx-service (10.32.0.230:80)
writing to stdout
- 100% |********************************| 50 0:00:00 ETA
written to stdout
```
As you can see everything works as expected.
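Note that the fully qualified name is not even required: the search entries the kubelet writes into the pod's resolv.conf let the short service name resolve as well (a sketch, using busybox's built-in applets):
```bash
# resolve the short name via the cluster DNS server
kubectl exec busy-box -- nslookup nginx-service

# and call the service by its short name
kubectl exec busy-box -- wget -O - nginx-service
```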