Update software components to the latest stable releases. Fix missing config; minor spelling/grammar/flow fixes.

The main purpose of this update is to make sure the guide still works with the newest versions of all the software components. While running through the guide I also found places to make bug fixes and minor improvements.
master
Elson Rodriguez 2024-11-14 15:43:51 -08:00 committed by Kelsey Hightower
parent a9cb5f7ba5
commit 5a325c23d7
15 changed files with 100 additions and 90 deletions

4
.gitignore vendored
View File

@@ -8,7 +8,7 @@ ca-csr.json
ca-key.pem
ca.csr
ca.pem
encryption-config.yaml
/encryption-config.yaml
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.csr
@@ -48,4 +48,4 @@ service-account.csr
service-account.pem
service-account-csr.json
*.swp
.idea/
.idea/

View File

@@ -19,9 +19,9 @@ Kubernetes The Hard Way guides you through bootstrapping a basic Kubernetes clus
Component versions:
* [kubernetes](https://github.com/kubernetes/kubernetes) v1.28.x
* [containerd](https://github.com/containerd/containerd) v1.7.x
* [cni](https://github.com/containernetworking/cni) v1.3.x
* [kubernetes](https://github.com/kubernetes/kubernetes) v1.31.x
* [containerd](https://github.com/containerd/containerd) v2.0.x
* [cni](https://github.com/containernetworking/cni) v1.6.x
* [etcd](https://github.com/etcd-io/etcd) v3.4.x
## Labs

View File

@@ -0,0 +1,11 @@
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
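The `${ENCRYPTION_KEY}` placeholder suggests this file is used as a template. A minimal sketch of generating the key and rendering the template, assuming the template lives at `configs/encryption-config.yaml` and that `envsubst` is available (both are assumptions, not verbatim guide steps):

```bash
# Generate a random 32-byte key and render the encryption config template.
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
envsubst < configs/encryption-config.yaml > encryption-config.yaml
```

Rendering the result to the repository root would also line up with the `/encryption-config.yaml` entry added to `.gitignore` above.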

View File

@@ -4,7 +4,7 @@ In this lab you will review the machine requirements necessary to follow this tu
## Virtual or Physical Machines
This tutorial requires four (4) virtual or physical ARM64 machines running Debian 12 (bookworm). The follow table list the four machines and thier CPU, memory, and storage requirements.
This tutorial requires four (4) virtual or physical ARM64 machines running Debian 12 (bookworm). The following table lists the four machines and their CPU, memory, and storage requirements.
| Name | Description | CPU | RAM | Storage |
|---------|------------------------|-----|-------|---------|
@@ -22,7 +22,7 @@ uname -mov
After running the `uname` command you should see the following output:
```text
#1 SMP Debian 6.1.55-1 (2023-09-29) aarch64 GNU/Linux
#1 SMP Debian 6.1.115-1 (2024-11-01) aarch64 GNU/Linux
```
You may be surprised to see `aarch64` here, but that is the official name for the Arm Architecture 64-bit instruction set. You will often see `arm64` used by Apple and the maintainers of the Linux kernel when referring to support for `aarch64`. This tutorial will use `arm64` consistently throughout to avoid confusion.
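If you want a one-line sanity check on each machine before continuing, something like this sketch works (it is an illustration, not a step from the guide):

```bash
# Warn unless the machine reports the arm64 (aarch64) architecture.
[ "$(uname -m)" = "aarch64" ] && echo "arm64: OK" || echo "WARNING: unexpected architecture: $(uname -m)"
```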

View File

@@ -49,19 +49,13 @@ pwd
In this section you will download the binaries for the various Kubernetes components. The binaries will be stored in the `downloads` directory on the `jumpbox`, which reduces the amount of internet bandwidth required to complete this tutorial because we avoid downloading the binaries separately for each machine in our Kubernetes cluster.
From the `kubernetes-the-hard-way` directory create a `downloads` directory using the `mkdir` command:
```bash
mkdir downloads
```
The binaries that will be downloaded are listed in the `downloads.txt` file, which you can review using the `cat` command:
```bash
cat downloads.txt
```
Download the binaries listed in the `downloads.txt` file using the `wget` command:
Download the binaries listed in the `downloads.txt` file into a directory called `downloads` using the `wget` command:
```bash
wget -q --show-progress \
@@ -78,18 +72,18 @@ ls -loh downloads
```
```text
total 584M
-rw-r--r-- 1 root 41M May 9 13:35 cni-plugins-linux-arm64-v1.3.0.tgz
-rw-r--r-- 1 root 34M Oct 26 15:21 containerd-1.7.8-linux-arm64.tar.gz
-rw-r--r-- 1 root 22M Aug 14 00:19 crictl-v1.28.0-linux-arm.tar.gz
-rw-r--r-- 1 root 15M Jul 11 02:30 etcd-v3.4.27-linux-arm64.tar.gz
-rw-r--r-- 1 root 111M Oct 18 07:34 kube-apiserver
-rw-r--r-- 1 root 107M Oct 18 07:34 kube-controller-manager
-rw-r--r-- 1 root 51M Oct 18 07:34 kube-proxy
-rw-r--r-- 1 root 52M Oct 18 07:34 kube-scheduler
-rw-r--r-- 1 root 46M Oct 18 07:34 kubectl
-rw-r--r-- 1 root 101M Oct 18 07:34 kubelet
-rw-r--r-- 1 root 9.6M Aug 10 18:57 runc.arm64
total 510M
-rw-r--r-- 1 root 48M Oct 15 02:37 cni-plugins-linux-arm64-v1.6.0.tgz
-rw-r--r-- 1 root 32M Nov 5 11:37 containerd-2.0.0-linux-arm64.tar.gz
-rw-r--r-- 1 root 17M Aug 13 03:48 crictl-v1.31.1-linux-arm64.tar.gz
-rw-r--r-- 1 root 16M Sep 11 11:28 etcd-v3.4.34-linux-arm64.tar.gz
-rw-r--r-- 1 root 84M Oct 22 21:41 kube-apiserver
-rw-r--r-- 1 root 79M Oct 22 21:41 kube-controller-manager
-rw-r--r-- 1 root 53M Oct 22 21:41 kubectl
-rw-r--r-- 1 root 72M Oct 22 21:41 kubelet
-rw-r--r-- 1 root 61M Oct 22 21:41 kube-proxy
-rw-r--r-- 1 root 60M Oct 22 21:41 kube-scheduler
-rw-r--r-- 1 root 11M Nov 1 15:23 runc.arm64
```
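For reference, the truncated `wget` command above might expand to something like the following sketch, which downloads every URL listed in `downloads.txt` into the `downloads` directory (the exact set of flags beyond `-P` and `-i` is an assumption):

```bash
wget -q --show-progress \
  --https-only \
  --timestamping \
  -P downloads \
  -i downloads.txt
```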
### Install kubectl
@@ -112,8 +106,8 @@ kubectl version --client
```
```text
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Client Version: v1.31.2
Kustomize Version: v5.4.2
```
At this point the `jumpbox` has been set up with all the command line tools and utilities necessary to complete the labs in this tutorial.

View File

@@ -34,13 +34,13 @@ SSH will be used to configure the machines in the cluster. Verify that you have
If `root` SSH access is enabled for each of your machines you can skip this section.
By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons as the `root` user is a well known user on Linux systems, and if a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mention earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. On each machine login via SSH using your user account, then switch to the `root` user using the `su` command:
By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons as the `root` user has total administrative control of unix-like systems. If a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mentioned earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. Log on to each machine via SSH using your user account, then switch to the `root` user using the `su` command:
```bash
su - root
```
Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and the `PermitRootLogin` option to `yes`:
Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the `PermitRootLogin` option to `yes`:
```bash
sed -i \
@@ -97,7 +97,7 @@ aarch64 GNU/Linux
## Hostnames
In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. The hostname also play a major role within the cluster. Instead of Kubernetes clients using an IP address to issue commands to the Kubernetes API server, those client will use the `server` hostname instead. Hostnames are also used by each worker machine, `node-0` and `node-1` when registering with a given Kubernetes cluster.
In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. The hostname also plays a major role within the cluster. Instead of Kubernetes clients using an IP address to issue commands to the Kubernetes API server, those clients will use the `server` hostname instead. Hostnames are also used by each worker machine, `node-0` and `node-1`, when registering with a given Kubernetes cluster.
To configure the hostname for each machine, run the following commands on the `jumpbox`.
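A sketch of what those commands might look like, assuming the `machines.txt` layout (`IP FQDN HOST SUBNET`) used elsewhere in this lab; the exact `sed` expression is an assumption:

```bash
while read IP FQDN HOST SUBNET; do
  # Point 127.0.1.1 at the machine's FQDN and short name, then set its hostname.
  ssh -n root@${IP} "sed -i 's/^127.0.1.1.*/127.0.1.1\t${FQDN} ${HOST}/' /etc/hosts"
  ssh -n root@${IP} hostnamectl set-hostname ${HOST}
done < machines.txt
```

The `-n` flag keeps `ssh` from consuming the loop's stdin, which would otherwise cut the `while read` short.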
@@ -125,9 +125,9 @@ node-0.kubernetes.local
node-1.kubernetes.local
```
## DNS
## Host Lookup Table
In this section you will generate a DNS `hosts` file which will be appended to `jumpbox` local `/etc/hosts` file and to the `/etc/hosts` file of all three machines used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.
In this section you will generate a `hosts` file which will be appended to the `/etc/hosts` file on the `jumpbox` and to the `/etc/hosts` files of all three cluster members used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.
Create a new `hosts` file and add a header to identify the machines being added:
@@ -136,7 +136,7 @@ echo "" > hosts
echo "# Kubernetes The Hard Way" >> hosts
```
Generate a DNS entry for each machine in the `machines.txt` file and append it to the `hosts` file:
Generate a host entry for each machine in the `machines.txt` file and append it to the `hosts` file:
```bash
while read IP FQDN HOST SUBNET; do
@@ -145,7 +145,7 @@ while read IP FQDN HOST SUBNET; do
done < machines.txt
```
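The body of the loop is elided by the hunk; as a sketch, with the entry format inferred from the sample output below:

```bash
while read IP FQDN HOST SUBNET; do
  # One line per machine: IP address, fully qualified name, short name.
  echo "${IP} ${FQDN} ${HOST}" >> hosts
done < machines.txt
```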
Review the DNS entries in the `hosts` file:
Review the host entries in the `hosts` file:
```bash
cat hosts
@@ -159,7 +159,7 @@ XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
```
## Adding DNS Entries To A Local Machine
## Adding `/etc/hosts` Entries To A Local Machine
In this section you will append the host entries from the `hosts` file to the local `/etc/hosts` file on your `jumpbox` machine.
@@ -206,9 +206,9 @@ node-0 aarch64 GNU/Linux
node-1 aarch64 GNU/Linux
```
## Adding DNS Entries To The Remote Machines
## Adding `/etc/hosts` Entries To The Remote Machines
In this section you will append the DNS entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.
In this section you will append the host entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.
Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:
@@ -220,6 +220,6 @@ while read IP FQDN HOST SUBNET; do
done < machines.txt
```
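The copy-and-append step inside this loop is likewise elided; a sketch of what it might do:

```bash
while read IP FQDN HOST SUBNET; do
  # Copy the generated hosts file and append it to the remote /etc/hosts.
  scp hosts root@${HOST}:~/
  ssh -n root@${HOST} "cat hosts >> /etc/hosts"
done < machines.txt
```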
At this point hostnames can be used when connecting to machines from your `jumpbox` machine, or any of the three machines in the Kubernetes cluster. Instead of using IP addresess you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.
At this point hostnames can be used when connecting to machines from your `jumpbox` machine, or any of the three machines in the Kubernetes cluster. Instead of using IP addresses you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

View File

@@ -14,7 +14,7 @@ cat ca.conf
You don't need to understand everything in the `ca.conf` file to complete this tutorial, but you should consider it a starting point for learning `openssl` and the configuration that goes into managing certificates at a high level.
Every certificate authority starts with a private key and root certificate. In this section we are going to create a self-signed certificate authority, and while that's all we need for this tutorial, this shouldn't be considered something you would do in a real-world production level environment.
Every certificate authority starts with a private key and root certificate. In this section we are going to create a self-signed certificate authority, and while that's all we need for this tutorial, this shouldn't be considered something you would do in a real-world production environment.
Generate the CA configuration file, certificate, and private key:
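The commands themselves fall outside the hunks shown here. A minimal `openssl` sketch, assuming OpenSSL 3.x and a `ca.conf` like the one reviewed above (key size, validity period, and file names are assumptions):

```bash
{
  # Private key for the certificate authority.
  openssl genrsa -out ca.key 4096
  # Self-signed root certificate driven by ca.conf.
  openssl req -x509 -new -sha512 \
    -key ca.key -days 3653 \
    -config ca.conf \
    -out ca.crt
}
```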
@@ -75,7 +75,7 @@ ls -1 *.crt *.key *.csr
## Distribute the Client and Server Certificates
In this section you will copy the various certificates to each machine under a directory that each Kubernetes components will search for the certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets as they are often used as credentials by the Kubernetes components to authenticate to each other.
In this section you will copy the various certificates to every machine at a path where each Kubernetes component will search for its certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets as they are used as credentials by the Kubernetes components to authenticate to each other.
Copy the appropriate certificates and private keys to the `node-0` and `node-1` machines:
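A sketch of that distribution step; the `/var/lib/kubelet/` destination and the certificate file names are assumptions about where each kubelet will look for its credentials:

```bash
for host in node-0 node-1; do
  ssh root@${host} mkdir -p /var/lib/kubelet/
  scp ca.crt root@${host}:/var/lib/kubelet/
  # Each node gets its own client certificate pair.
  scp ${host}.crt root@${host}:/var/lib/kubelet/kubelet.crt
  scp ${host}.key root@${host}:/var/lib/kubelet/kubelet.key
done
```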

View File

@@ -1,6 +1,6 @@
# Generating Kubernetes Configuration Files for Authentication
In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
In this lab you will generate [Kubernetes client configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), typically called kubeconfigs, which configure Kubernetes clients to connect and authenticate to Kubernetes API Servers.
## Client Authentication Configs
@@ -12,7 +12,7 @@ When generating kubeconfig files for Kubelets the client certificate matching th
> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
Generate a kubeconfig file the node-0 worker node:
Generate a kubeconfig file for the node-0 worker node:
```bash
for host in node-0 node-1; do
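  # The rest of this loop is elided by the hunk. A sketch of what it might do;
  # the cluster name, server address, and certificate file names below are
  # assumptions based on the other labs, not verbatim guide text.
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=${host}.kubeconfig

  kubectl config set-credentials system:node:${host} \
    --client-certificate=${host}.crt \
    --client-key=${host}.key \
    --embed-certs=true \
    --kubeconfig=${host}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${host} \
    --kubeconfig=${host}.kubeconfig

  kubectl config use-context default --kubeconfig=${host}.kubeconfig
done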

View File

@@ -8,7 +8,7 @@ Copy `etcd` binaries and systemd unit files to the `server` instance:
```bash
scp \
downloads/etcd-v3.4.27-linux-arm64.tar.gz \
downloads/etcd-v3.4.34-linux-arm64.tar.gz \
units/etcd.service \
root@server:~/
```
@@ -27,8 +27,8 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
```bash
{
tar -xvf etcd-v3.4.27-linux-arm64.tar.gz
mv etcd-v3.4.27-linux-arm64/etcd* /usr/local/bin/
tar -xvf etcd-v3.4.34-linux-arm64.tar.gz
mv etcd-v3.4.34-linux-arm64/etcd* /usr/local/bin/
}
```
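The remaining steps of this lab fall outside the hunks shown here; roughly, they install the unit file and start the service. A sketch, assuming the `etcd.service` unit copied to the `server` earlier:

```bash
{
  mv etcd.service /etc/systemd/system/
  systemctl daemon-reload
  systemctl enable etcd
  systemctl start etcd
}
```

A quick check afterwards would be running `etcdctl member list` on the `server`.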

View File

@@ -1,10 +1,10 @@
# Bootstrapping the Kubernetes Control Plane
In this lab you will bootstrap the Kubernetes control plane. The following components will be installed the controller machine: Kubernetes API Server, Scheduler, and Controller Manager.
In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the controller machine: Kubernetes API Server, Scheduler, and Controller Manager.
## Prerequisites
Copy Kubernetes binaries and systemd unit files to the `server` instance:
Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files to the `server` instance:
```bash
scp \
@@ -166,12 +166,12 @@ curl -k --cacert ca.crt https://server.kubernetes.local:6443/version
```text
{
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"minor": "31",
"gitVersion": "v1.31.2",
"gitCommit": "5864a4677267e6adeae276ad85882a8714d69d9d",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"buildDate": "2024-10-22T20:28:14Z",
"goVersion": "go1.22.8",
"compiler": "gc",
"platform": "linux/arm64"
}

View File

@@ -24,9 +24,9 @@ done
for host in node-0 node-1; do
scp \
downloads/runc.arm64 \
downloads/crictl-v1.28.0-linux-arm.tar.gz \
downloads/cni-plugins-linux-arm64-v1.3.0.tgz \
downloads/containerd-1.7.8-linux-arm64.tar.gz \
downloads/crictl-v1.31.1-linux-arm64.tar.gz \
downloads/cni-plugins-linux-arm64-v1.6.0.tgz \
downloads/containerd-2.0.0-linux-arm64.tar.gz \
downloads/kubectl \
downloads/kubelet \
downloads/kube-proxy \
@@ -95,9 +95,9 @@ Install the worker binaries:
```bash
{
mkdir -p containerd
tar -xvf crictl-v1.28.0-linux-arm.tar.gz
tar -xvf containerd-1.7.8-linux-arm64.tar.gz -C containerd
tar -xvf cni-plugins-linux-arm64-v1.3.0.tgz -C /opt/cni/bin/
tar -xvf crictl-v1.31.1-linux-arm64.tar.gz
tar -xvf containerd-2.0.0-linux-arm64.tar.gz -C containerd
tar -xvf cni-plugins-linux-arm64-v1.6.0.tgz -C /opt/cni/bin/
mv runc.arm64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
@@ -169,8 +169,8 @@ ssh root@server \
```
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 1m v1.28.3
node-1 Ready <none> 10s v1.28.3
node-0 Ready <none> 1m v1.31.2
node-1 Ready <none> 10s v1.31.2
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

View File

@@ -18,12 +18,12 @@ curl -k --cacert ca.crt \
```text
{
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"minor": "31",
"gitVersion": "v1.31.2",
"gitCommit": "5864a4677267e6adeae276ad85882a8714d69d9d",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"buildDate": "2024-10-22T20:28:14Z",
"goVersion": "go1.22.8",
"compiler": "gc",
"platform": "linux/arm64"
}
@@ -61,9 +61,9 @@ kubectl version
```
```text
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
Client Version: v1.31.2
Kustomize Version: v5.4.2
Server Version: v1.31.2
```
List the nodes in the remote Kubernetes cluster:
@@ -74,8 +74,8 @@ kubectl get nodes
```
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 30m v1.28.3
node-1 Ready <none> 35m v1.28.3
node-0 Ready <none> 30m v1.31.2
node-1 Ready <none> 35m v1.31.2
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

View File

@@ -100,15 +100,14 @@ curl --head http://127.0.0.1:8080
```text
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Sun, 29 Oct 2023 01:44:32 GMT
Server: nginx/1.27.2
Date: Thu, 14 Nov 2024 00:16:32 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Last-Modified: Wed, 02 Oct 2024 15:13:19 GMT
Connection: keep-alive
ETag: "6537cac7-267"
ETag: "66fd630f-267"
Accept-Ranges: bytes
```
Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
@@ -132,7 +131,7 @@ kubectl logs $POD_NAME
```text
...
127.0.0.1 - - [01/Nov/2023:06:10:17 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.88.1" "-"
127.0.0.1 - - [14/Nov/2024:00:16:32 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.88.1" "-"
```
### Exec
@@ -146,7 +145,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
```
```text
nginx version: nginx/1.25.3
nginx version: nginx/1.27.2
```
## Services
@@ -169,6 +168,8 @@ NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```
Make an HTTP request using the IP address and the `nginx` node port:
```bash
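# A sketch of the request; it assumes the node hostnames added to /etc/hosts in
# an earlier lab resolve from where you run this, and reuses the NODE_PORT
# captured above.
curl -I http://node-0:${NODE_PORT}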

View File

@@ -4,4 +4,8 @@ In this lab you will delete the compute resources created during this tutorial.
## Compute Instances
Delete the controller and worker compute instances.
Previous versions of this guide made use of GCP resources for various aspects of compute and networking. The current version is platform agnostic, and all configuration is performed on the `jumpbox`, the `server`, or the nodes.
Cleanup is as simple as deleting all the virtual machines you created for this exercise.
Next: [Start Over](../README.md)

View File

@@ -1,11 +1,11 @@
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-apiserver
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-controller-manager
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-scheduler
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-arm.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.arm64
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz
https://github.com/containerd/containerd/releases/download/v1.7.8/containerd-1.7.8-linux-arm64.tar.gz
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-proxy
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubelet
https://github.com/etcd-io/etcd/releases/download/v3.4.27/etcd-v3.4.27-linux-arm64.tar.gz
https://dl.k8s.io/v1.31.2/bin/linux/arm64/kubectl
https://dl.k8s.io/v1.31.2/bin/linux/arm64/kube-apiserver
https://dl.k8s.io/v1.31.2/bin/linux/arm64/kube-controller-manager
https://dl.k8s.io/v1.31.2/bin/linux/arm64/kube-scheduler
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.31.1/crictl-v1.31.1-linux-arm64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.2.1/runc.arm64
https://github.com/containernetworking/plugins/releases/download/v1.6.0/cni-plugins-linux-arm64-v1.6.0.tgz
https://github.com/containerd/containerd/releases/download/v2.0.0/containerd-2.0.0-linux-arm64.tar.gz
https://dl.k8s.io/v1.31.2/bin/linux/arm64/kube-proxy
https://dl.k8s.io/v1.31.2/bin/linux/arm64/kubelet
https://github.com/etcd-io/etcd/releases/download/v3.4.34/etcd-v3.4.34-linux-arm64.tar.gz