Refresh and add Apple Silicon (#338)

* Delete CKA stuff.
It's covered in the CKA repo

* Rename nodes

* Cluster up again

* Update issue template

* Update README

* Begin rearranging docs

* Update links

* Initial mac instructions

* iterm2 image

* update ssh-copy-id to be cross platform

* remove vagrant specific

* Apple scripts WIP

* Add var for architecture

* order input files

* Apple build working!

* auto-locate docs

* install sshpass

* Set execute bit

* apple done!

* install sshpass

* edits

* Corrections

* kube version output

* Adjustments

* Adjustments
Authored by Alistair Mackay on 2024-03-18 05:16:56 +00:00, committed by GitHub
parent 645b296cb6, commit 65b5a11784
81 changed files with 1400 additions and 787 deletions


@@ -6,7 +6,7 @@ In the previous step we configured a worker node by
 - Creating a kube-config file using this certificate ourselves
 - Every time the certificate expires, we must follow the same process to renew it ourselves

-This is not a practical approach when you have 1000s of nodes in the cluster, and nodes dynamically being added and removed from the cluster. With TLS boostrapping:
+This is not a practical approach when you could have 1000s of nodes in the cluster, with nodes dynamically being added and removed. With TLS bootstrapping:

 - The nodes can generate certificate key pairs themselves
 - The nodes can generate certificate signing requests themselves
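Mechanically, this behaviour is driven by the kubelet itself. Below is a minimal sketch of the settings involved; the flag names are real kubelet options, while the paths follow this guide's conventions and are assumptions until they are configured later in this lab (rotation may instead be enabled with `rotateCertificates: true` in the kubelet config file):

```bash
# Sketch: the kubelet settings that drive TLS bootstrapping.
# --bootstrap-kubeconfig carries the bootstrap token and is used only until
# a signed client certificate lands in --cert-dir; after that the kubelet
# authenticates with --kubeconfig and rotates its certificates on its own.
kubelet \
  --bootstrap-kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --cert-dir=/var/lib/kubelet/pki/ \
  --rotate-certificates
```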
@@ -41,11 +41,11 @@ So let's get started!
 > Note: We have already configured these in lab 8 in this course

-# Step 1 Create the Boostrap Token to be used by Nodes(Kubelets) to invoke Certificate API
+# Step 1 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke Certificate API

-[//]: # (host:master-1)
+[//]: # (host:controlplane01)

-Run the following steps on `master-1`
+Run the following steps on `controlplane01`

 For the workers (kubelets) to access the Certificates API, they need to authenticate to the Kubernetes API server first. For this we create a [Bootstrap Token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) to be used by the kubelet.
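The token itself lives as a Secret in the `kube-system` namespace. A hedged sketch following the bootstrap-token Secret format referenced below; the id/secret values and the extra group are illustrative, and the lab's actual manifest may differ:

```bash
cat <<EOF | kubectl create -f - --kubeconfig admin.kubeconfig
apiVersion: v1
kind: Secret
metadata:
  # The name MUST be of the form "bootstrap-token-<token-id>"
  name: bootstrap-token-07401b
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: 07401b                  # public half of the token
  token-secret: f395accd246ae52d    # secret half of the token
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:worker
EOF
```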
@@ -100,7 +100,7 @@ Once this is created the token to be used for authentication is `07401b.f395accd
 Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#bootstrap-token-secret-format

-## Step 2 Authorize workers(kubelets) to create CSR
+## Step 2 Authorize nodes (kubelets) to create CSR

 Next, we associate the group we created earlier with the system:node-bootstrapper ClusterRole. This ClusterRole grants the group enough permissions to bootstrap the kubelet.
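The binding created from `csrs-for-bootstrapping.yaml` likely resembles the standard one in the Kubernetes docs. A sketch, assuming the bound group is `system:bootstrappers` (it must match a group the bootstrap token confers):

```bash
cat > csrs-for-bootstrapping.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
EOF
```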
@@ -135,7 +135,7 @@ kubectl create -f csrs-for-bootstrapping.yaml --kubeconfig admin.kubeconfig
 ```

 Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr

-## Step 3 Authorize workers(kubelets) to approve CSRs
+## Step 3 Authorize nodes (kubelets) to approve CSRs

 ```bash
 kubectl create clusterrolebinding auto-approve-csrs-for-group \
@@ -168,7 +168,7 @@ kubectl create -f auto-approve-csrs-for-group.yaml --kubeconfig admin.kubeconfig
 Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval

-## Step 4 Authorize workers(kubelets) to Auto Renew Certificates on expiration
+## Step 4 Authorize nodes (kubelets) to Auto Renew Certificates on expiration

 We now create the ClusterRoleBinding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more, since by renewal time the node will already be bootstrapped and part of the cluster. All nodes are part of the **system:nodes** group.
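Per the upstream TLS bootstrapping docs, this binding plausibly looks like the sketch below; the `selfnodeclient` ClusterRole only lets a node renew its *own* client certificate:

```bash
cat > auto-approve-renewals-for-nodes.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
EOF
```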
@@ -206,9 +206,9 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-b
 ## Step 5 Configure the Binaries on the Worker node

-Going forward all activities are to be done on the `worker-2` node until [step 11](#step-11-approve-server-csr).
+Going forward all activities are to be done on the `node02` node until [step 11](#step-11-approve-server-csr).

-[//]: # (host:worker-2)
+[//]: # (host:node02)

 ### Download and Install Worker Binaries
@@ -218,8 +218,8 @@ Note that kubectl is required here to assist with creating the boostrap kubeconf
 KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)

 wget -q --show-progress --https-only --timestamping \
-  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-proxy \
-  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubelet
+  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-proxy \
+  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kubelet
 ```

 Reference: https://kubernetes.io/releases/download/#binaries
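`${ARCH}` is the architecture variable this commit introduces for Apple Silicon support. A sketch of how it could be derived on a node (an assumption; the lab may set it elsewhere):

```bash
# Map the kernel's machine name onto the architecture strings used in the
# Kubernetes release download URLs.
case $(uname -m) in
  aarch64) ARCH=arm64 ;;  # ARM64 guests, e.g. on Apple Silicon
  *)       ARCH=amd64 ;;  # Intel/AMD guests
esac
```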
@@ -256,10 +256,10 @@ Move the certificates and secure them.
 It is now time to configure the second worker to TLS bootstrap using the token we generated.

-For worker-1 we started by creating a kubeconfig file with the TLS certificates that we manually generated.
+For `node01` we started by creating a kubeconfig file with the TLS certificates that we manually generated.

 Here we don't have the certificates yet, so we cannot create a kubeconfig file. Instead we create a bootstrap-kubeconfig file containing information about the token we created.

-This is to be done on the `worker-2` node. Note that now we have set up the load balancer to provide high availibilty across the API servers, we point kubelet to the load balancer.
+This is to be done on the `node02` node. Note that now that we have set up the load balancer to provide high availability across the API servers, we point kubelet at the load balancer.

 Set up some shell variables for nodes and services we will require in the following configurations:
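With those variables in place, the bootstrap kubeconfig can be generated with kubectl itself. A sketch, assuming `LOADBALANCER` holds the load balancer address, the CA certificate sits at `/var/lib/kubernetes/pki/ca.crt`, and the token matches the one created in step 1:

```bash
{
  # Cluster entry pointing at the load balancer, trusting the cluster CA
  sudo kubectl config set-cluster bootstrap \
    --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    --server="https://${LOADBALANCER}:6443" \
    --certificate-authority=/var/lib/kubernetes/pki/ca.crt

  # Credentials are just the bootstrap token (illustrative value)
  sudo kubectl config set-credentials kubelet-bootstrap \
    --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    --token=07401b.f395accd246ae52d

  sudo kubectl config set-context bootstrap \
    --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    --user=kubelet-bootstrap \
    --cluster=bootstrap

  sudo kubectl config use-context bootstrap \
    --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig
}
```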
@@ -367,6 +367,7 @@ ExecStart=/usr/local/bin/kubelet \\
   --config=/var/lib/kubelet/kubelet-config.yaml \\
   --kubeconfig=/var/lib/kubelet/kubeconfig \\
   --cert-dir=/var/lib/kubelet/pki/ \\
+  --node-ip=${PRIMARY_IP} \\
   --v=2
 Restart=on-failure
 RestartSec=5
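`--node-ip` is the one line this hunk adds; it pins the kubelet to the node's primary address so the API server always reaches the node on the intended interface. A sketch of how `PRIMARY_IP` might be derived (an assumption; the lab defines it among the shell variables set up earlier):

```bash
# Use the source address the kernel picks for an outbound route, i.e. the
# address bound to the node's primary interface.
PRIMARY_IP=$(ip route get 8.8.8.8 | grep -oP 'src \K[0-9.]+')
```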
@@ -404,7 +405,7 @@ kind: KubeProxyConfiguration
 apiVersion: kubeproxy.config.k8s.io/v1alpha1
 clientConnection:
   kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
-mode: ipvs
+mode: iptables
 clusterCIDR: ${POD_CIDR}
 EOF
 ```
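This hunk switches kube-proxy from `ipvs` to `iptables` mode, plausibly because iptables mode has no dependency on IPVS kernel modules, which may be absent from some guest images. One way to confirm which mode kube-proxy actually selected once it is running (a sketch):

```bash
# kube-proxy logs which proxier it chose at startup.
sudo journalctl -u kube-proxy | grep -i proxier
```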
@@ -431,7 +432,7 @@ EOF
 ## Step 10 Start the Worker Services

-On worker-2:
+On `node02`:

 ```bash
 {
@@ -440,11 +441,11 @@ On worker-2:
   sudo systemctl start kubelet kube-proxy
 }
 ```

-> Remember to run the above commands on worker node: `worker-2`
+> Remember to run the above commands on worker node: `node02`

 ### Optional - Check Certificates and kubeconfigs

-At `worker-2` node, run the following, selecting option 5
+On the `node02` node, run the following, selecting option 5

 [//]: # (command:sleep 5)
 [//]: # (command:./cert_verify.sh 5)
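Before moving on to approve the server CSR, a quick sanity check that both services actually started (a sketch):

```bash
# Both units should report "active".
sudo systemctl is-active kubelet kube-proxy
```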
@@ -456,11 +457,11 @@ At `worker-2` node, run the following, selecting option 5
 ## Step 11 Approve Server CSR

-Now, go back to `master-1` and approve the pending kubelet-serving certificate
+Now, go back to `controlplane01` and approve the pending kubelet-serving certificate

-[//]: # (host:master-1)
+[//]: # (host:controlplane01)

 [//]: # (command:sudo apt install -y jq)
-[//]: # (command:kubectl certificate approve --kubeconfig admin.kubeconfig $(kubectl get csr --kubeconfig admin.kubeconfig -o json | jq -r '.items | .[] | select(.spec.username == "system:node:worker-2") | .metadata.name'))
+[//]: # (command:kubectl certificate approve --kubeconfig admin.kubeconfig $(kubectl get csr --kubeconfig admin.kubeconfig -o json | jq -r '.items | .[] | select(.spec.username == "system:node:node02") | .metadata.name'))

 ```bash
 kubectl get csr --kubeconfig admin.kubeconfig
@@ -470,7 +471,7 @@ kubectl get csr --kubeconfig admin.kubeconfig
 ```
 NAME        AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
-csr-7k8nh   85s   kubernetes.io/kubelet-serving                 system:node:worker-2      <none>              Pending
+csr-7k8nh   85s   kubernetes.io/kubelet-serving                 system:node:node02        <none>              Pending
 csr-n7z8p   98s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:07401b   <none>              Approved,Issued
 ```
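To approve the pending kubelet-serving request by hand, something like the following would do (the CSR name shown above, `csr-7k8nh`, will differ in your cluster):

```bash
kubectl certificate approve csr-7k8nh --kubeconfig admin.kubeconfig
```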
@@ -487,19 +488,21 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-b
 ## Verification

-List the registered Kubernetes nodes from the master node:
+List the registered Kubernetes nodes from the controlplane node:

 ```bash
 kubectl get nodes --kubeconfig admin.kubeconfig
 ```

-> output
+Output will be similar to

 ```
 NAME       STATUS     ROLES    AGE   VERSION
-worker-1   NotReady   <none>   93s   v1.28.4
-worker-2   NotReady   <none>   93s   v1.28.4
+node01     NotReady   <none>   93s   v1.28.4
+node02     NotReady   <none>   93s   v1.28.4
 ```

-Prev: [Bootstrapping the Kubernetes Worker Nodes](10-bootstrapping-kubernetes-workers.md)</br>
-Next: [Configuring Kubectl](12-configuring-kubectl.md)
+Nodes are still not ready. As previously mentioned, this is expected.
+
+Next: [Configuring Kubectl](./12-configuring-kubectl.md)</br>
+Prev: [Bootstrapping the Kubernetes Worker Nodes](./10-bootstrapping-kubernetes-workers.md)