Upgrade/1.24 (#291)

* Set up Vagrantfile
- Use Ubuntu 22.04
- Set required kernel parameters and tunables
- Optimise file for DRY by use of local functions
- No longer install Docker

* Update prerequisites

* Update compute resources

* Update client-tools

* Update cert authority

* Update kube config files

* Update data encryption keys

* Update etcd

* Cert enhancements
- Use dig for host IPs
- Create front-proxy keys

* Update prereqs with lab defaults

* Minor update

* Dynamic kubelet reconfig removed in 1.24

* Update failed provisioning

* Update cert subjects. Use vars for IP addresses

* Use vars for IP addresses

* Use vars for IPs. Update unit file

* Unit updates for 1.24. Use vars for IPs

* 1.24 changes
- Update unit files
- Use vars for IPs
- Install containerd

* Use vars for IPs. Update outputs

* Remove CNI plugins - done earlier

* Update API versions

* Adjust VM RAM

* Update coredns version and api versions

* Update git ignore and attributes

* Note about deprecation warning

* Fix kubeconfig name

* Formatting changes + pin nginx version

* Update kubetest

* Update README

* Discuss why only 2 masters

* Note on changing service cidr range vs coredns

* Add RAM column to VM table

* Best practice - secure PKI

* Secure kubeconfig

* Add prev link

* Adding `Prev` links

* Squashed commit of the following:

commit 8fbd36069cbf7365f627e5ebf5a04e37cde085d9
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 20:06:10 2022 +0100

    Update dns-addon test

commit 5528e873ecbe3265155da48d24c24d696635af52
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 20:00:48 2022 +0100

    Fix get nodes

commit 0d88ab0d1c4b6a7ae05bc2552366460f741bb763
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 20:00:19 2022 +0100

    Fix env var name

commit e564db03ff9c4c9ef536bcc5cd999fa1e6a3de15
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:42:52 2022 +0100

    Update e2e-tests

commit 247a59f2c5b84e34972f396cf87a34bcbeb2d2ef
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:39:54 2022 +0100

    Updated e2e-tests

commit 60b33d025bb252570f41c13f90955ec8d59141a7
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:38:02 2022 +0100

    bashify commands in ```

commit 2814949d6dd569c59ea7ec61135784d51ad4de1f
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:35:32 2022 +0100

    Note deprecation warning when deploying weave

commit af0264e13e5f0e277f8f31e5115a813680aadd74
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:33:55 2022 +0100

    Nodes are ready at end of step 11

commit 050502386d36a8593ed7348e902cdff9ad9c64b2
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:30:00 2022 +0100

    Minor change CNI

commit 04bdc1483e9696ed018ac26b6480237ee1dcf1d1
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:21:22 2022 +0100

    Explain data at rest is in etcd

commit 243154b9866f5a7a1a49037f97e38c6bf7ffbcb7
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:18:49 2022 +0100

    Explanation of api cluster ip

commit dd168ac2e128cbd405248115d8724498fa18fa67
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:14:42 2022 +0100

    Include vagrant password

commit d51c65a77ac192e2468d92f0067958c69057a2e0
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:12:34 2022 +0100

    Update tmux message

commit 10f41737100ab410adb6b20712ee32cd80618e3d
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 19:09:23 2022 +0100

    Insert step to configure CNI on both workers
    Optionally with tmux

commit 8fd873f1492f6ea1c846b3309f57740e8501adee
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 18:42:27 2022 +0100

    Shuffle up to make room for common cni install

commit d650443b069a7543cbb4cf449818a81d84932007
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 07:34:59 2022 +0100

    Added warning output to componentstatuses

commit 7bfef8f16bd1a126dcf3e5f43a02d79517d64c74
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 07:34:38 2022 +0100

    Rearrange text

commit b16b92bc6513cf355a41afa22ddfe2696142c28b
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 07:34:18 2022 +0100

    Minor wording change
    DNS address is conventionally .10

commit 96c9d25663ce3d721e670262bb6858e9a7183873
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 07:32:24 2022 +0100

    Use shell vars for etcd addresses

commit c9e223fba5324a1c65d6f583cf9e739b8459df5d
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 07:31:58 2022 +0100

    Update on network defaults

commit 1cf98649df9410b8a7d14c68bcb17c24aa6a210a
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 07:05:38 2022 +0100

    Get and install correct CNI components

commit 311905fba72f4a48cde4a73c589daea9b76042b7
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Thu Aug 25 06:18:55 2022 +0100

    Update Approve CSR

commit 4c39c84c172fde8ab2aafc4ea38b050eb7f3019b
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Wed Aug 24 20:34:53 2022 +0100

    Moving certs out of service kubeconfigs

* Squashed commit of the following:

commit 252cc335739e3c8007ab86c951222aba954d80f7
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Sun Aug 28 20:29:23 2022 +0100

    Update external links

commit 8091d1a13bc5a29654db2b8fecd55b8180bf8cab
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Sun Aug 28 20:28:14 2022 +0100

    Mac M1 note

commit 8b7e6065ffb74532b6ad7570a8c978addcc7fb66
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Sun Aug 28 20:03:11 2022 +0100

    Tweak order of commands e2e tests

commit 857d039dd1dff28e92d392ad6c5e40814a9eb054
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Sun Aug 28 20:02:51 2022 +0100

    Fixing kubeconfig checks

commit 26f42049bebd2d539406e6e16c51bb06441702f1
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Sun Aug 28 15:51:13 2022 +0100

    Updated cert_verify

commit 0df54e4c3499e6d79b836e1dfcf74eb9fdf196b1
Author: Alistair Mackay <34012094+fireflycons@users.noreply.github.com>
Date:   Sun Aug 28 09:09:14 2022 +0100

    Rewrite cert_verify
    Round 1 certs and kubeconfigs

* Update README
- Insert CNI lab
- Correct CNI versions

* Automate hostfile network settings
Determine from interface address passed in.

* Update 01-prerequisites.md

* Update 01-prerequisites.md

Correct the default vm ip range

* Review updates. Issue 1

* Review updates. Issue 2

* Review updates. Issue 3
In actual fact, the base script is cert_verify.sh, so the error is in the
link created by the provisioner. You'll see that the later labs all
refer to it with underscore.

* Review updates. Issue 5

* Review updates. Issue 6

* Review updates. Issue 7
I whip through the scripts so fast that even though I had copied it twice
into my quick script, I didn't notice it saying that the resource exists and
is unchanged!

* These certs already copied in step 4

* Formatting and command grouping

* Review updates. Step 11 cert_verify
Needs to be done after the kubelet starts, as it is looking
for the auto-issued cert

* Group command batches

* Remove duplicate clusterrolebinding

* Extraction of scripts from md using tool
This uses markdown comments and ```bash fences
to determine what to extract and for which hosts

Fixed shell var bug in step 11

* Fixed typos

* Be specific that we're doing shutdown, not suspend

* Minor edits for clarity

* remove the extra \

* Rename step 9 to CRI, as that's what it actually is

* Disambiguate CRI vs CNI

* small fixes

Co-authored-by: Tej Singh Rana <58101587+Tej-Singh-Rana@users.noreply.github.com>
# TLS Bootstrapping Worker Nodes
In the previous step we configured a worker node by:
- Creating a set of key pairs for the worker node ourselves
- Getting them signed by the CA ourselves
- Creating a kube-config file using this certificate ourselves
- Renewing the certificate ourselves by the same process every time it expires

This is not a practical approach when you have thousands of nodes in the cluster, with nodes dynamically being added to and removed from the cluster. With TLS bootstrapping:
- The Nodes can generate certificate key pairs by themselves
- The Nodes can generate certificate signing requests by themselves
- The Nodes can submit the certificate signing request to the Kubernetes CA (Using the Certificates API)
- The Nodes can retrieve the signed certificate from the Kubernetes CA
- The Nodes can generate a kube-config file using this certificate by themselves
- The Nodes can start and join the cluster by themselves
- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
In Kubernetes 1.11 a patch was merged to require administrator or Controller approval of node serving CSRs for security reasons.
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#certificate-rotation
So let's get started!
# What is required for TLS Bootstrapping
**Certificates API:** The Certificates API (as discussed in the lecture) provides a set of APIs on Kubernetes that help us manage certificates (create a CSR, get it signed by the CA, retrieve the signed certificate, etc.). The worker nodes (kubelets) have the ability to use this API to get certificates signed by the Kubernetes CA.
# Prerequisites
**kube-apiserver** - Ensure bootstrap token based authentication is enabled on the kube-apiserver.
`--enable-bootstrap-token-auth=true`
**kube-controller-manager** - The certificate requests are ultimately signed by the kube-controller-manager, which requires the CA certificate and key to perform these operations.
```
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/ca.key
```
> Note: We have already configured these in lab 8 in this course
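Before proceeding, you can optionally confirm these flags are in place on a control plane node. This is a quick sketch, assuming the systemd unit file locations used in lab 8 of this course:
```bash
# On master-1: check that the bootstrap-auth and CA-signing flags are present
# (paths assume the unit file locations used earlier in this course)
sudo grep -E 'enable-bootstrap-token-auth|cluster-signing' \
  /etc/systemd/system/kube-apiserver.service \
  /etc/systemd/system/kube-controller-manager.service
```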
## Step 1 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke Certificate API
[//]: # (host:master-1)
Run the following steps on `master-1`
For the workers (kubelets) to access the Certificates API, they need to authenticate to the Kubernetes API server first. For this we create a [Bootstrap Token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) to be used by the kubelet.
Bootstrap Tokens take the form of a 6-character token ID followed by a 16-character token secret, separated by a dot, e.g. `abcdef.0123456789abcdef`. More formally, they must match the regular expression `[a-z0-9]{6}\.[a-z0-9]{16}`.
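This lab continues with a fixed token, `07401b.f395accd246ae52d`. If you wanted to generate your own token of the correct shape, a minimal sketch (hex output is a subset of `[a-z0-9]`):
```bash
# Optional: generate a random token matching [a-z0-9]{6}.[a-z0-9]{16}.
# The rest of this lab assumes the fixed token 07401b.f395accd246ae52d.
TOKEN_ID=$(openssl rand -hex 3)      # 6 hex characters
TOKEN_SECRET=$(openssl rand -hex 8)  # 16 hex characters
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```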
Set an expiration date for the bootstrap token of 7 days from now (you can adjust this)
```bash
EXPIRATION=$(date -u --date "+7 days" +"%Y-%m-%dT%H:%M:%SZ")
```
```bash
cat > bootstrap-token-07401b.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-07401b
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."
  # Token ID and secret. Required.
  token-id: 07401b
  token-secret: f395accd246ae52d
  # Expiration. Optional.
  expiration: ${EXPIRATION}
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker
EOF
kubectl create -f bootstrap-token-07401b.yaml --kubeconfig admin.kubeconfig
```
Things to note:
- **expiration** - make sure it's set to a date in the future; the computed shell variable `EXPIRATION` ensures this.
- **auth-extra-groups** - this is the group the worker nodes are made part of. It must start with `system:bootstrappers:`. This group does not need to exist already; it is associated with this token.
Once this is created the token to be used for authentication is `07401b.f395accd246ae52d`
Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#bootstrap-token-secret-format
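Optionally, confirm the token secret was stored where the API server expects it:
```bash
kubectl get secret bootstrap-token-07401b -n kube-system --kubeconfig admin.kubeconfig
```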
## Step 2 Authorize workers (kubelets) to create CSR
Next, we bind the group we created above to the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet.
```bash
kubectl create clusterrolebinding create-csrs-for-bootstrapping \
--clusterrole=system:node-bootstrapper \
--group=system:bootstrappers \
--kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > csrs-for-bootstrapping.yaml <<EOF
# enable bootstrapping nodes to create CSR
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f csrs-for-bootstrapping.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr
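To see what this authorizes, you can describe the ClusterRole itself; it grants permission to create CertificateSigningRequests:
```bash
kubectl describe clusterrole system:node-bootstrapper --kubeconfig admin.kubeconfig
```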
## Step 3 Authorize workers (kubelets) to approve CSRs
```bash
kubectl create clusterrolebinding auto-approve-csrs-for-group \
--clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
--group=system:bootstrappers \
--kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > auto-approve-csrs-for-group.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f auto-approve-csrs-for-group.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval
## Step 4 Authorize workers (kubelets) to Auto Renew Certificates on expiration
We now create the ClusterRoleBinding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more: by the time renewal is due, the node will already have bootstrapped and joined the cluster. All nodes are part of the **system:nodes** group.
```bash
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
--clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
--group=system:nodes \
--kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > auto-approve-renewals-for-nodes.yaml <<EOF
# Approve renewal CSRs for the group "system:nodes"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl create -f auto-approve-renewals-for-nodes.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval
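At this point all three bindings should exist. A quick way to confirm:
```bash
kubectl get clusterrolebindings create-csrs-for-bootstrapping \
  auto-approve-csrs-for-group auto-approve-renewals-for-nodes \
  --kubeconfig admin.kubeconfig
```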
## Step 5 Configure the Binaries on the Worker node
Going forward all activities are to be done on the `worker-2` node until [step 11](#step-11-approve-server-csr).
[//]: # (host:worker-2)
### Download and Install Worker Binaries
```bash
wget -q --show-progress --https-only --timestamping \
  https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kubelet
```
Reference: https://kubernetes.io/releases/download/#binaries
Create the installation directories:
```bash
sudo mkdir -p \
  /var/lib/kubelet/pki \
  /var/lib/kube-proxy \
  /var/lib/kubernetes/pki \
  /var/run/kubernetes
```
Install the worker binaries:
```bash
{
  chmod +x kubectl kube-proxy kubelet
  sudo mv kubectl kube-proxy kubelet /usr/local/bin/
}
```
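Optionally confirm the binaries are the expected version:
```bash
kubectl version --client
kubelet --version
kube-proxy --version
```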
Move the certificates and secure them.
```bash
{
  sudo mv ca.crt kube-proxy.crt kube-proxy.key /var/lib/kubernetes/pki
  sudo chown root:root /var/lib/kubernetes/pki/*
  sudo chmod 600 /var/lib/kubernetes/pki/*
}
```
## Step 6 Configure Kubelet to TLS Bootstrap
It is now time to configure the second worker to TLS bootstrap using the token we generated.
For `worker-1` we started by creating a kubeconfig file with the TLS certificates that we manually generated.
Here we don't have the certificates yet, so we cannot create a kubeconfig file. Instead we create a bootstrap-kubeconfig file containing the token we created.
This is to be done on the `worker-2` node. Note that now that we have set up the load balancer to provide high availability across the API servers, we point kubelet at the load balancer.
Set up some shell variables for nodes and services we will require in the following configurations:
```bash
LOADBALANCER=$(dig +short loadbalancer)
POD_CIDR=10.244.0.0/16
SERVICE_CIDR=10.96.0.0/16
CLUSTER_DNS=$(echo $SERVICE_CIDR | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s.10", $1, $2, $3) }')
```
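You can sanity-check the derived values before continuing. With the defaults above, `CLUSTER_DNS` should come out as `10.96.0.10` - the conventional `.10` address in the service range:
```bash
# Expect 10.96.0.10 and the IP address of the loadbalancer VM
echo "CLUSTER_DNS=${CLUSTER_DNS} LOADBALANCER=${LOADBALANCER}"
```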
Set up the bootstrap kubeconfig.
```bash
{
  sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    set-cluster bootstrap --server="https://${LOADBALANCER}:6443" --certificate-authority=/var/lib/kubernetes/pki/ca.crt
  sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
  sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
  sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    use-context bootstrap
}
```
--------------- OR ---------------
```bash
cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/pki/ca.crt
    server: https://${LOADBALANCER}:6443
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d
EOF
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubelet-configuration
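If you want to eyeball the generated file, `kubectl config view` can render it (the token is redacted in this view):
```bash
sudo kubectl config view --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig
```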
## Step 7 Create Kubelet Config File
Create the `kubelet-config.yaml` configuration file:
```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /var/lib/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- ${CLUSTER_DNS}
registerNode: true
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: "15m"
serverTLSBootstrap: true
EOF
```
> Note: We are not specifying the certificate details, `tlsCertFile` and `tlsPrivateKeyFile`, in this file. With `rotateCertificates` and `serverTLSBootstrap` set, the kubelet obtains these via the Certificates API and stores them under its `--cert-dir`.
## Step 8 Configure Kubelet Service
Create the `kubelet.service` systemd unit file:
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig" \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --cert-dir=/var/lib/kubelet/pki/ \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Things to note here:
- **bootstrap-kubeconfig**: Location of the bootstrap-kubeconfig file.
- **cert-dir**: The directory where the generated certificates are stored.
- **kubeconfig**: We specify a location for this *but we have not yet created it*. Kubelet will create one itself upon successful bootstrap.
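For reference, after the bootstrap completes (steps 10 and 11) you can expect the cert-dir to contain files along these lines; exact names and timestamps will differ:
```bash
ls -l /var/lib/kubelet/pki
# kubelet-client-<timestamp>.pem
# kubelet-client-current.pem -> kubelet-client-<timestamp>.pem
# kubelet-server-* files appear once the serving CSR from step 11 is approved
```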
## Step 9 Configure the Kubernetes Proxy
In one of the previous steps we created the kube-proxy.kubeconfig file. Check [here](https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md) if you missed it.
```bash
{
  sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/
  sudo chown root:root /var/lib/kube-proxy/kube-proxy.kubeconfig
  sudo chmod 600 /var/lib/kube-proxy/kube-proxy.kubeconfig
}
```
Create the `kube-proxy-config.yaml` configuration file:
```bash
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
mode: iptables
clusterCIDR: ${POD_CIDR}
EOF
```
Create the `kube-proxy.service` systemd unit file:
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
## Step 10 Start the Worker Services
On worker-2:
```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable kubelet kube-proxy
  sudo systemctl start kubelet kube-proxy
}
```
> Remember to run the above commands on worker node: `worker-2`
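If either service fails to start, check its state and the kubelet logs:
```bash
sudo systemctl status kubelet kube-proxy --no-pager
sudo journalctl -u kubelet --no-pager | tail -50
```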
### Optional - Check Certificates and kubeconfigs
On the `worker-2` node, run the following, selecting option 5:
```bash
./cert_verify.sh
```
## Step 11 Approve Server CSR
Now, go back to `master-1` and approve the pending kubelet-serving certificate
[//]: # (host:master-1)
[//]: # (comment:Please now manually approve the certificate before proceeding)
```
kubectl get csr --kubeconfig admin.kubeconfig
```
> Output - Note the name will be different, but it will begin with `csr-`
```
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
csr-7k8nh 85s kubernetes.io/kubelet-serving system:node:worker-2 <none> Pending
csr-n7z8p 98s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:07401b <none> Approved,Issued
```
Approve the pending certificate. Note that the certificate name `csr-7k8nh` will be different for you, and will differ each time you run through the lab.
```
kubectl certificate approve csr-7k8nh --kubeconfig admin.kubeconfig
```
Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubectl-approval
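If you prefer not to copy the CSR name by hand, here is a sketch that selects the CSR by its signer name. It assumes worker-2's serving request is the only CSR with the `kubernetes.io/kubelet-serving` signer:
```bash
# Find the CSR issued to the kubelet-serving signer and approve it.
# Assumes only worker-2's pending serving request matches this signer.
SERVING_CSR=$(kubectl get csr --kubeconfig admin.kubeconfig \
  -o jsonpath='{.items[?(@.spec.signerName=="kubernetes.io/kubelet-serving")].metadata.name}')
kubectl certificate approve ${SERVING_CSR} --kubeconfig admin.kubeconfig
```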
## Verification
List the registered Kubernetes nodes from the master node:
```bash
kubectl get nodes --kubeconfig admin.kubeconfig
```
> output
```
NAME STATUS ROLES AGE VERSION
worker-1 NotReady <none> 93s v1.24.3
worker-2 NotReady <none> 93s v1.24.3
```
Prev: [Bootstrapping the Kubernetes Worker Nodes](10-bootstrapping-kubernetes-workers.md)<br/>
Next: [Configuring Kubectl](12-configuring-kubectl.md)