# Controller manager
In this section, we will configure the controller manager.

>Controller Manager is a core component responsible for managing the various controllers that regulate the desired state of the cluster. It runs as a separate process on the Kubernetes control plane and includes several built-in controllers.
To see the controller manager in action, we will create a deployment before the controller manager is configured.
```bash
{
cat <<EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-deployment
  template:
    metadata:
      labels:
        app: hello-world-deployment
    spec:
      serviceAccountName: hello-world
      containers:
      - name: hello-world-container
        image: busybox
        command: ['sh', '-c', 'while true; do echo "Hello, World from deployment!"; sleep 1; done']
EOF
kubectl apply -f deployment.yaml
}
```
Check the status of the created deployment:
```bash
kubectl get deploy
```
Output:
```
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
deployment   0/1     0            0           24s
```
As we can see, our deployment isn't in a ready state.
As we already mentioned, in Kubernetes the controller manager is responsible for ensuring that the actual state of the cluster matches the desired state. In our case this means that the deployment controller should create a ReplicaSet, and the ReplicaSet controller should create a pod, which will then be assigned to a node by the scheduler. But since the controller manager is not configured yet, nothing happens with the created deployment.
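We can confirm where the chain stops: the deployment controller has not created a ReplicaSet yet, so listing ReplicaSets should come back empty (assuming nothing else is running in the default namespace):
```bash
kubectl get replicaset
```
With no controller manager running, this returns `No resources found in default namespace.`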
So, let's configure the controller manager.
## Certificates
We will start with certificates.
As you remember, we configured our API server to use client certificates to authenticate users.
So, let's create a proper certificate for the controller manager.
```bash
{
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
```
Created certs:
```
kube-controller-manager.csr
kube-controller-manager-key.pem
kube-controller-manager.pem
```
The most interesting configuration options:
- CN (common name) - the value the API server will use as the client's user name
- O (organization) - the group the client will belong to during authorization

We specified "system:kube-controller-manager" as the organization. This tells the API server that a client presenting this certificate belongs to the system:kube-controller-manager group.
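As a quick sanity check (assuming `openssl` is available on the machine), we can print the subject of the generated certificate and confirm both fields:
```bash
openssl x509 -in kube-controller-manager.pem -noout -subject
```
The subject should include `CN = system:kube-controller-manager` and `O = system:kube-controller-manager`.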
Now, we will distribute the CA private key, which the controller manager will use to sign cluster certificates:
```bash
cp ca-key.pem /var/lib/kubernetes/
```
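The service configuration created later in this section expects both `ca.pem` and `ca-key.pem` under `/var/lib/kubernetes/`. If you are following along from the API server section, `ca.pem` should already be there; a quick check, assuming that layout:
```bash
ls -l /var/lib/kubernetes/ca.pem /var/lib/kubernetes/ca-key.pem
```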
## Configuration
After the certificate files are created, we can create the configuration file for the controller manager.
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://127.0.0.1:6443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```
We created a kubeconfig file, which tells the controller manager where the API server is located and which certificates to use when communicating with it.
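If you want to see what ended up in the file, `kubectl config view` prints it (the embedded certificate data is abbreviated rather than shown in full):
```bash
kubectl config view --kubeconfig=kube-controller-manager.kubeconfig
```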
Now, we can distribute the created configuration file.
```bash
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```
After all the required configuration files are created, we need to download the controller manager binary.
```bash
wget -q --show-progress --https-only --timestamping \
  "https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-controller-manager"
```
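Optionally, we can verify the download. Kubernetes publishes a SHA-256 checksum next to each release binary, so a sketch like the following should work (assuming the `.sha256` file is available at the same path):
```bash
wget -q "https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-controller-manager.sha256"
echo "$(cat kube-controller-manager.sha256)  kube-controller-manager" | sha256sum --check
```
If the binary is intact, `sha256sum` reports `kube-controller-manager: OK`.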
And install it
```bash
chmod +x kube-controller-manager \
  && mv kube-controller-manager /usr/local/bin/
```
Now, we can create the configuration file for the controller manager systemd service.
```bash
cat <<EOF | tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
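Before starting the service, we can optionally ask systemd to parse the unit file and report any problems (assuming `systemd-analyze` is installed, which it normally is on systemd-based distributions):
```bash
systemd-analyze verify /etc/systemd/system/kube-controller-manager.service
```
If the unit file is well formed, this prints nothing or only harmless warnings.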
After the configuration file is created, we can start the controller manager.
```bash
systemctl daemon-reload \
  && systemctl enable kube-controller-manager \
  && systemctl start kube-controller-manager
```
And finally, we can check the controller manager status.
```bash
systemctl status kube-controller-manager
```
Output:
```
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-04-20 11:48:41 UTC; 30s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 14805 (kube-controller)
Tasks: 6 (limit: 2275)
Memory: 32.0M
CGroup: /system.slice/kube-controller-manager.service
└─14805 /usr/local/bin/kube-controller-manager --bind-address=0.0.0.0 --cluster-cidr=10.200.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/var/lib/kubernetes/c>
...
```
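If the service is not in the `active (running)` state, or you simply want to see what the controller manager is doing, the journal is the place to look (a sketch; the exact log lines vary by version):
```bash
journalctl -u kube-controller-manager --no-pager | tail -n 50
```
You should see the individual controllers starting and, because `--leader-elect=true` is set, a message about acquiring the leader lease.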
As you can see, our controller manager is up and running, so we can get back to our deployment.
```bash
kubectl get deploy
```
Output:
```
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
deployment   1/1     1            1           2m8s
```
As you can see, our deployment is up and running, and all desired pods are also in the running state.
```bash
kubectl get pods
```
Output:
```
NAME                          READY   STATUS    RESTARTS   AGE
deployment-74fc7cdd68-89rqw   1/1     Running   0          67s
```
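Remember that right after creating the deployment there were no ReplicaSets at all. Now the deployment controller has created one, which in turn owns the pod above:
```bash
kubectl get replicaset
```
You should see a ReplicaSet with the same `deployment-74fc7cdd68` prefix as the pod name (the hash will differ in your cluster) with one ready replica.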
Now that our controller manager is configured, let's clean up our workspace.
```bash
kubectl delete -f deployment.yaml
```
Check that the pod created by the deployment was deleted:
```bash
kubectl get pod
```
Output:
```
No resources found in default namespace.
```
Next: [Kube-proxy](./09-kubeproxy.md)