# Kube-proxy
In this section we will configure kube-proxy.
> kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
> kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
![image](./img/09_cluster_architecture_proxy.png "Kube-proxy")

Before we start, let's clarify why we need it. To do that, we will create a deployment with nginx.
```bash
{
cat <<EOF | tee nginx-deployment.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  default.conf: |
    server {
        listen 80;
        server_name _;
        location / {
            return 200 "Hello from pod: \$hostname\n";
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.3
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
EOF

kubectl apply -f nginx-deployment.yml
}
```
```bash
kubectl get pod -o wide
```
Output:
```
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
nginx-deployment-db9778f94-2zv7x   1/1     Running   0          63s   10.240.1.12   example-server   <none>           <none>
nginx-deployment-db9778f94-q5jx4   1/1     Running   0          63s   10.240.1.10   example-server   <none>           <none>
nginx-deployment-db9778f94-twx78   1/1     Running   0          63s   10.240.1.11   example-server   <none>           <none>
```
As you can see, we created 3 pods, each with its own IP address. Now we will run a busybox container and try to access our pods from another container.
```bash
{
cat <<EOF | tee pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busy-box
spec:
  containers:
  - name: busy-box
    image: busybox
    command: ['sh', '-c', 'while true; do echo "Busy"; sleep 1; done']
EOF

kubectl apply -f pod.yaml
}
```
Now execute a command from our busy-box container:
```bash
kubectl exec busy-box -- wget -O - $(kubectl get pod -o wide | grep nginx | awk '{print $6}' | head -n 1)
```
Output:
```
Connecting to 10.240.1.12 (10.240.1.12:80)
writing to stdout
Hello from pod: nginx-deployment-db9778f94-2zv7x
-                    100% |********************************|    50  0:00:00 ETA
written to stdout
```
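The command substitution above simply pulls the first pod IP out of the `kubectl get pod -o wide` listing. As a standalone illustration, here is the same pipeline run against a line captured from the output above (the sample line is hard-coded for the example):

```shell
# Extract column 6 (the pod IP) from a captured 'kubectl get pod -o wide'
# line, exactly as the command substitution above does.
sample='nginx-deployment-db9778f94-2zv7x   1/1   Running   0   63s   10.240.1.12   example-server   <none>   <none>'
echo "$sample" | grep nginx | awk '{print $6}' | head -n 1
# -> 10.240.1.12
```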
Note: it usually takes some time to apply all RBAC policies; until they are applied, you may still see permission errors.
As you can see, we successfully received a response from nginx. But to do that, we used the IP address of the pod. To solve the service discovery issue, Kubernetes has a special resource - the Service. Now we will create one.
```bash
{
cat <<EOF | tee nginx-service.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF

kubectl apply -f nginx-service.yml
}
```
Check the created service:
```bash
kubectl get service
```
Now we will try to access our pods using the IP of the service we created.
```bash
kubectl exec busy-box -- wget -O - $(kubectl get service -o wide | grep nginx | awk '{print $3}')
```
Output:
```
Connecting to 10.32.0.230 (10.32.0.230:80)
```
As you can see, we received an error. It occurred because the node knows nothing about the IP created for the service. As already mentioned, kube-proxy is the component responsible for handling requests to the service IP and redirecting them to the pods. So, let's configure kube-proxy.
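If you are curious what is (not) happening at the network level, you can inspect the node's NAT table. In iptables mode, kube-proxy programs service rules into a `KUBE-SERVICES` chain; a quick check (assuming iptables mode and the chain name kube-proxy conventionally creates) shows the chain does not exist yet:

```shell
# Before kube-proxy runs there is no KUBE-SERVICES chain in the NAT table,
# so packets sent to a service IP are simply not translated to any pod.
iptables -t nat -L KUBE-SERVICES -n 2>/dev/null \
  || echo "no KUBE-SERVICES chain yet: kube-proxy has not programmed any rules"
```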
## certificates
We will start with the certificates.

As you remember, we configured our API server to use client certificates to authenticate users.
So, let's create a proper certificate for kube-proxy:
```bash
{
cat <<EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy
}
```
The most interesting configuration options:
- CN (common name) - the value the API server will use as the client name during authorization
- O (organization) - the group the client belongs to during authorization

We specified "system:node-proxier" as the organization. It tells the API server that the client using this certificate belongs to the system:node-proxier group.
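To double-check what the API server will actually see, you can print the subject encoded in the generated certificate (an optional sanity check; it requires the `kube-proxy.pem` created above):

```shell
# The subject should show CN = system:kube-proxy (the user name) and
# O = system:node-proxier (the group) that the API server authorizes against.
openssl x509 -in kube-proxy.pem -noout -subject 2>/dev/null \
  || echo "kube-proxy.pem not found - generate the certificate first"
```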
## configuration
After the certificate files are created, we can create the configuration file for kube-proxy.
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
```
We created a kubeconfig file that tells kube-proxy where the API server is and which certificates to use when communicating with it.
Now we can distribute the configuration file we created:
```bash
mkdir -p /var/lib/kube-proxy \
&& mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
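As a quick sanity check (not part of the original setup), you can confirm the file landed in place and points at the API server address used in this guide:

```shell
# The embedded kubeconfig should reference the local API server endpoint
# configured earlier.
grep "server:" /var/lib/kube-proxy/kubeconfig 2>/dev/null \
  || echo "kubeconfig not found at /var/lib/kube-proxy/kubeconfig"
```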
After all required configuration files are created, we need to download the kube-proxy binary.
```bash
wget -q --show-progress --https-only --timestamping \
  https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-proxy
```
And install it:
```bash
chmod +x kube-proxy \
&& mv kube-proxy /usr/local/bin/
```
Now we can create the configuration file for kube-proxy:
```bash
cat <<EOF | tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```
And the systemd service file:
```bash
cat <<EOF | tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Start the service:
```bash
systemctl daemon-reload \
&& systemctl enable kube-proxy \
&& systemctl start kube-proxy
```
And check its status:
```bash
systemctl status kube-proxy
```
Output:
```
● kube-proxy.service - Kubernetes Kube Proxy
     Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2023-04-20 13:37:27 UTC; 23s ago
       Docs: https://github.com/kubernetes/kubernetes
   Main PID: 19873 (kube-proxy)
      Tasks: 5 (limit: 2275)
     Memory: 10.0M
     CGroup: /system.slice/kube-proxy.service
             └─19873 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/kube-proxy-config.yaml
...
```
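You can also revisit the NAT table check from earlier: with kube-proxy running, rules for every service (including `nginx-service`) should now appear in the `KUBE-SERVICES` chain (assuming iptables mode, as configured above):

```shell
# With kube-proxy active, each service gets a NAT rule; grep for the
# nginx-service entry programmed from its ClusterIP.
iptables -t nat -L KUBE-SERVICES -n 2>/dev/null | grep nginx-service \
  || echo "no nginx-service rule found (is kube-proxy running?)"
```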
## verification

After we configured kube-proxy, we can check how our service works once again.
```bash
kubectl exec busy-box -- wget -O - $(kubectl get service -o wide | grep nginx | awk '{print $3}')
```
Output:
```
Connecting to 10.32.0.230 (10.32.0.230:80)
writing to stdout
Hello from pod: nginx-deployment-db9778f94-q5jx4
-                    100% |********************************|    50  0:00:00 ETA
written to stdout
```
If you repeat the command several times, you will see that the requests are handled by different pods.
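To see the load balancing more clearly, you can tally the responses from repeated requests (a small sketch, assuming the `busy-box` pod and `nginx-service` from this section still exist):

```shell
# Hit the service ten times and count which pod answered each request;
# the requests should be spread across the three nginx replicas.
SERVICE_IP=$(kubectl get service nginx-service -o jsonpath='{.spec.clusterIP}' 2>/dev/null || echo "")
for i in $(seq 1 10); do
  kubectl exec busy-box -- wget -qO - "$SERVICE_IP" 2>/dev/null || true
done | grep "Hello from pod" | sort | uniq -c
```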