# TLS Bootstrapping Worker Nodes
In the previous step we configured a worker node by

- Creating a set of key pairs for the worker node ourselves
- Getting them signed by the CA ourselves
- Creating a kubeconfig file using this certificate ourselves
- Repeating this same process ourselves every time the certificate expires

This is not a practical approach when you have thousands of nodes in the cluster, with nodes dynamically being added and removed. With TLS bootstrapping:
- The Nodes can generate certificate key pairs by themselves
- The Nodes can generate certificate signing request by themselves
- The Nodes can submit the certificate signing request to the Kubernetes CA (Using the Certificates API)
- The Nodes can retrieve the signed certificate from the Kubernetes CA
- The Nodes can generate a kube-config file using this certificate by themselves
- The Nodes can start and join the cluster by themselves
- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
In Kubernetes 1.11 a patch was merged to require administrator or Controller approval of node serving CSRs for security reasons.
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#certificate-rotation
So let's get started!
# What is required for TLS Bootstrapping
**Certificates API:** The Certificates API (as discussed in the lecture) provides a set of APIs on Kubernetes that help us manage certificates (create a CSR, get it signed by the CA, retrieve the signed certificate, etc.). The worker nodes (kubelets) have the ability to use this API to get certificates signed by the Kubernetes CA.
# Prerequisites
**kube-apiserver** - Ensure bootstrap token based authentication is enabled on the kube-apiserver.
`--enable-bootstrap-token-auth=true`
**kube-controller-manager** - The certificate requests are signed by the kube-controller-manager ultimately. The kube-controller-manager requires the CA Certificate and Key to perform these operations.
```
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/ca.key
```
> Note: We have already configured these in lab 8 in this course
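If you want to double-check, one quick way is to grep the control plane service definitions for these flags. This is only a sketch; it assumes the unit files were written to `/etc/systemd/system/` as in the earlier labs of this course.

```bash
# Run on a control plane node (e.g. master-1). Paths are assumptions from the earlier labs.
grep -- '--enable-bootstrap-token-auth' /etc/systemd/system/kube-apiserver.service
grep -E -- '--cluster-signing-(cert|key)-file' /etc/systemd/system/kube-controller-manager.service
```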
## Step 1 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke Certificate API
[//]: # (host:master-1)
Run the following steps on `master-1`
For the workers (kubelets) to access the Certificates API, they need to authenticate to the Kubernetes API server first. For this we create a [Bootstrap Token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) to be used by the kubelet.

Bootstrap Tokens take the form of a 6-character token ID followed by a 16-character token secret, separated by a dot, e.g. `abcdef.0123456789abcdef`. More formally, they must match the regular expression `[a-z0-9]{6}\.[a-z0-9]{16}`.
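This lab uses a fixed token (`07401b.f395accd246ae52d`). If you prefer to generate your own, here is a minimal sketch using standard shell tools; if you do so, substitute your values for `07401b` and `f395accd246ae52d` in the steps below.

```bash
# Generate a random 6-character token ID and 16-character token secret
# matching the required pattern [a-z0-9]{6}.[a-z0-9]{16}.
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```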
Set an expiration date for the bootstrap token of 7 days from now (you can adjust this)
```bash
EXPIRATION=$(date -u --date "+7 days" +"%Y-%m-%dT%H:%M:%SZ")
```
```bash
cat > bootstrap-token-07401b.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-07401b
  namespace: kube-system

# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."

  # Token ID and secret. Required.
  token-id: 07401b
  token-secret: f395accd246ae52d

  # Expiration. Optional.
  expiration: ${EXPIRATION}

  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"

  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker
EOF

kubectl create -f bootstrap-token-07401b.yaml --kubeconfig admin.kubeconfig
```
Things to note:

- **expiration** - make sure it is set to a date in the future. The `EXPIRATION` shell variable computed above ensures this.
- **auth-extra-groups** - the group the worker nodes will be part of. It must start with `system:bootstrappers:`. This group does not need to exist beforehand; it is associated with this token.

Once this is created, the token to be used for authentication is `07401b.f395accd246ae52d`.
Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#bootstrap-token-secret-format
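Optionally, confirm the token secret was created in the `kube-system` namespace:

```bash
kubectl get secret bootstrap-token-07401b --namespace kube-system --kubeconfig admin.kubeconfig
```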
## Step 2 Authorize workers (kubelets) to create CSRs
Next we associate the group we created above with the `system:node-bootstrapper` ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet.
```bash
kubectl create clusterrolebinding create-csrs-for-bootstrapping \
--clusterrole=system:node-bootstrapper \
--group=system:bootstrappers \
--kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > csrs-for-bootstrapping.yaml <<EOF
# enable bootstrapping nodes to create CSR
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl create -f csrs-for-bootstrapping.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr
## Step 3 Authorize workers (kubelets) to approve CSRs
```bash
kubectl create clusterrolebinding auto-approve-csrs-for-group \
--clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
--group=system:bootstrappers \
--kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > auto-approve-csrs-for-group.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl create -f auto-approve-csrs-for-group.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval
## Step 4 Authorize workers (kubelets) to Auto Renew Certificates on expiration
We now create the ClusterRoleBinding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more, since by the time renewal is due, the node should already be bootstrapped and part of the cluster. All nodes are part of the **system:nodes** group.
```bash
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
--clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
--group=system:nodes \
--kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > auto-approve-renewals-for-nodes.yaml <<EOF
# Approve renewal CSRs for the group "system:nodes"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF

kubectl create -f auto-approve-renewals-for-nodes.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval
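Optionally, confirm that all three ClusterRoleBindings created in steps 2 to 4 now exist:

```bash
kubectl get clusterrolebindings \
  create-csrs-for-bootstrapping \
  auto-approve-csrs-for-group \
  auto-approve-renewals-for-nodes \
  --kubeconfig admin.kubeconfig
```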
## Step 5 Configure the Binaries on the Worker node
Going forward all activities are to be done on the `worker-2` node until [step 11](#step-11-approve-server-csr).
[//]: # (host:worker-2)
### Download and Install Worker Binaries
Note that kubectl is required here to assist with creating the bootstrap kubeconfigs for kubelet and kube-proxy.
```bash
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)

wget -q --show-progress --https-only --timestamping \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-proxy \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubelet
```
Reference: https://kubernetes.io/releases/download/#binaries
Create the installation directories:
```bash
sudo mkdir -p \
/var/lib/kubelet/pki \
/var/lib/kube-proxy \
/var/lib/kubernetes/pki \
/var/run/kubernetes
```
Install the worker binaries:
```bash
{
  chmod +x kube-proxy kubelet
  sudo mv kube-proxy kubelet /usr/local/bin/
}
```
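Optionally, verify that the installed binaries report the expected version:

```bash
kubelet --version
kube-proxy --version
```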
Move the certificates and secure them.
```bash
{
sudo mv ca.crt kube-proxy.crt kube-proxy.key /var/lib/kubernetes/pki
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
}
```
## Step 6 Configure Kubelet to TLS Bootstrap
It is now time to configure the second worker to TLS bootstrap using the token we generated.

For worker-1 we started by creating a kubeconfig file with the TLS certificates that we manually generated. Here we don't have the certificates yet, so we cannot create a kubeconfig file. Instead, we create a bootstrap-kubeconfig file containing the token we created.
This is to be done on the `worker-2` node. Note that now that we have set up the load balancer to provide high availability across the API servers, we point kubelet at the load balancer.
Set up some shell variables for nodes and services we will require in the following configurations:
```bash
LOADBALANCER=$(dig +short loadbalancer)
POD_CIDR=10.244.0.0/16
SERVICE_CIDR=10.96.0.0/16
CLUSTER_DNS=$(echo $SERVICE_CIDR | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s.10", $1, $2, $3) }')
```
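Optionally, sanity-check the computed values before using them; the cluster DNS address should be the `.10` address within the service CIDR:

```bash
echo "LOADBALANCER=${LOADBALANCER}"
echo "POD_CIDR=${POD_CIDR}  SERVICE_CIDR=${SERVICE_CIDR}  CLUSTER_DNS=${CLUSTER_DNS}"
```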
Set up the bootstrap kubeconfig.
```bash
{
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
set-cluster bootstrap --server="https://${LOADBALANCER}:6443" --certificate-authority=/var/lib/kubernetes/pki/ca.crt
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
use-context bootstrap
}
```
--------------- OR ---------------
```bash
cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/pki/ca.crt
    server: https://${LOADBALANCER}:6443
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d
EOF
```
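Either way, you can inspect the generated file to confirm the load balancer address and token were filled in:

```bash
sudo cat /var/lib/kubelet/bootstrap-kubeconfig
```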
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration
## Step 7 Create Kubelet Config File
Create the `kubelet-config.yaml` configuration file:
Reference: https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /var/lib/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
cgroupDriver: systemd
clusterDomain: "cluster.local"
clusterDNS:
  - ${CLUSTER_DNS}
registerNode: true
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: "15m"
serverTLSBootstrap: true
EOF
```
> Note: We are not specifying the certificate details - `tlsCertFile` and `tlsPrivateKeyFile` - in this file. They will be generated by the TLS bootstrap process.
## Step 8 Configure Kubelet Service
Create the `kubelet.service` systemd unit file:
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig" \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --cert-dir=/var/lib/kubelet/pki/ \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Things to note here:
- **bootstrap-kubeconfig**: Location of the bootstrap-kubeconfig file.
- **cert-dir**: The directory where the generated certificates are stored.
- **kubeconfig**: We specify a location for this *but we have not yet created it*. Kubelet will create one itself upon successful bootstrap.
## Step 9 Configure the Kubernetes Proxy
In one of the previous steps we created the kube-proxy.kubeconfig file. Check [here](https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md) if you missed it.
```bash
{
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/
sudo chown root:root /var/lib/kube-proxy/kube-proxy.kubeconfig
sudo chmod 600 /var/lib/kube-proxy/kube-proxy.kubeconfig
}
```
Create the `kube-proxy-config.yaml` configuration file:
Reference: https://kubernetes.io/docs/reference/config-api/kube-proxy-config.v1alpha1/
```bash
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
mode: ipvs
clusterCIDR: ${POD_CIDR}
EOF
```
Create the `kube-proxy.service` systemd unit file:
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
## Step 10 Start the Worker Services
On worker-2:
```bash
{
sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy
}
```
> Remember to run the above commands on worker node: `worker-2`
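Optionally, check that both services came up and watch kubelet performing the TLS bootstrap (the exact log lines vary by version):

```bash
sudo systemctl status kubelet kube-proxy --no-pager
sudo journalctl -u kubelet --no-pager -n 30
```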
### Optional - Check Certificates and kubeconfigs
On the `worker-2` node, run the following, selecting option 5:
[//]: # (command:sleep 5)
[//]: # (command:./cert_verify.sh 5)
```bash
./cert_verify.sh
```
## Step 11 Approve Server CSR
Now, go back to `master-1` and approve the pending kubelet-serving certificate.
[//]: # (host:master-1)
[//]: # (command:sudo apt install -y jq)
[//]: # (command:kubectl certificate approve --kubeconfig admin.kubeconfig $(kubectl get csr --kubeconfig admin.kubeconfig -o json | jq -r '.items | .[] | select(.spec.username == "system:node:worker-2") | .metadata.name'))
```bash
kubectl get csr --kubeconfig admin.kubeconfig
```
> Output - Note the name will be different, but it will begin with `csr-`
```
NAME        AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-7k8nh   85s   kubernetes.io/kubelet-serving                 system:node:worker-2      <none>              Pending
csr-n7z8p   98s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:07401b   <none>              Approved,Issued
```
Approve the pending certificate. Note that the certificate name `csr-7k8nh` will be different for you, and it will change each time you run through this lab.
```bash
kubectl certificate approve --kubeconfig admin.kubeconfig csr-7k8nh
```
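Alternatively, you can select the pending serving CSR by its requesting user and approve it in one go. This mirrors the automated command used by this lab's test harness and assumes `jq` is installed:

```bash
# Approve whichever CSR was requested by kubelet on worker-2, without copying its name by hand.
kubectl certificate approve --kubeconfig admin.kubeconfig \
  $(kubectl get csr --kubeconfig admin.kubeconfig -o json \
    | jq -r '.items | .[] | select(.spec.username == "system:node:worker-2") | .metadata.name')
```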
> Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubectl-approval
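Optionally, back on `worker-2`, you can confirm that kubelet stored the bootstrapped certificates in the directory given by `--cert-dir`. You should see files such as `kubelet-client-current.pem` and, after the approval above, `kubelet-server-current.pem` (exact names may vary by version):

```bash
sudo ls -l /var/lib/kubelet/pki/
```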
## Verification
List the registered Kubernetes nodes from the master node:
```bash
kubectl get nodes --kubeconfig admin.kubeconfig
```
> output
```
NAME       STATUS     ROLES    AGE   VERSION
worker-1   NotReady   <none>   93s   v1.28.4
worker-2   NotReady   <none>   93s   v1.28.4
```
Prev: [Bootstrapping the Kubernetes Worker Nodes](10-bootstrapping-kubernetes-workers.md)<br>
Next: [Configuring Kubectl](12-configuring-kubectl.md)