11 Commits

Author SHA1 Message Date
Kelsey Hightower 52eb26dad1 support arm64 and amd64 2025-04-09 23:11:35 -07:00
Kelsey Hightower b2bf9fb2f6 add amd64 binaries 2025-04-08 08:44:52 -07:00
Kelsey Hightower d1f7e159e1 document control plane status checks 2025-04-08 07:42:00 -07:00
Kelsey Hightower b6e493e463 generate kube-apiserver server certificate 2025-04-08 07:30:28 -07:00
Kelsey Hightower f377d2ad74 etcd supports arm64 2025-04-08 07:15:21 -07:00
Kelsey Hightower 86d51471b4 bridge CNI networking works with iptables 2025-04-07 17:46:00 -07:00
Kelsey Hightower ea9178edae set kubelet hostname 2025-04-07 17:08:56 -07:00
Kelsey Hightower c9690e523a bootstrap a single node etcd cluster 2025-04-06 19:12:43 -07:00
Kelsey Hightower 08b198f2a0 Update to Kubernetes 1.32.3 2025-04-06 18:32:30 -07:00
Elson Rodriguez 5a325c23d7 Updating software components to latest stable releases. Fix missing config, minor spelling/grammar/flow fixes. 2024-11-20 23:03:00 -07:00
    The main purpose of this update is to make sure the guide still works with the newest version of all software. In running through the guide I found places to make bug fixes and minor improvements.
Kelsey Hightower a9cb5f7ba5 Remove cloud provider and move to ARM64 2024-04-06 13:02:06 -07:00
22 changed files with 339 additions and 239 deletions

.gitignore

@@ -8,7 +8,7 @@ ca-csr.json
ca-key.pem
ca.csr
ca.pem
encryption-config.yaml
/encryption-config.yaml
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.csr
@@ -48,4 +48,4 @@ service-account.csr
service-account.pem
service-account-csr.json
*.swp
.idea/
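The leading slash added above anchors the pattern to the repository root: the generated `encryption-config.yaml` at the top level stays ignored, while the tracked template added under `configs/` in this change no longer matches the rule. A quick way to confirm the behavior from a checkout (a sketch; `git check-ignore` reports which rule, if any, ignores each path):

```bash
# Only the root-level file should match the anchored pattern.
git check-ignore -v encryption-config.yaml configs/encryption-config.yaml
```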

README.md

@@ -19,14 +19,14 @@ Kubernetes The Hard Way guides you through bootstrapping a basic Kubernetes clus
Component versions:
* [kubernetes](https://github.com/kubernetes/kubernetes) v1.28.x
* [containerd](https://github.com/containerd/containerd) v1.7.x
* [cni](https://github.com/containernetworking/cni) v1.3.x
* [etcd](https://github.com/etcd-io/etcd) v3.4.x
* [kubernetes](https://github.com/kubernetes/kubernetes) v1.32.x
* [containerd](https://github.com/containerd/containerd) v2.1.x
* [cni](https://github.com/containernetworking/cni) v1.6.x
* [etcd](https://github.com/etcd-io/etcd) v3.6.x
## Labs
This tutorial requires four (4) ARM64 based virtual or physical machines connected to the same network. While ARM64 based machines are used for the tutorial, the lessons learned can be applied to other platforms.
This tutorial requires four (4) ARM64 or AMD64 based virtual or physical machines connected to the same network.
* [Prerequisites](docs/01-prerequisites.md)
* [Setting up the Jumpbox](docs/02-jumpbox.md)

ca.conf

@@ -124,7 +124,7 @@ extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Controller Manager Certificate"
subjectAltName = DNS:kube-proxy, IP:127.0.0.1
subjectAltName = DNS:kube-controller-manager, IP:127.0.0.1
subjectKeyIdentifier = hash
[kube-controller-manager_distinguished_name]
@@ -174,8 +174,8 @@ req_extensions = kube-api-server_req_extensions
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Scheduler Certificate"
nsCertType = client, server
nsComment = "Kube API Server Certificate"
subjectAltName = @kube-api-server_alt_names
subjectKeyIdentifier = hash
@@ -203,4 +203,4 @@ extendedKeyUsage = clientAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Admin Client Certificate"
subjectKeyIdentifier = hash

configs/encryption-config.yaml (new file)

@@ -0,0 +1,11 @@
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
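The `${ENCRYPTION_KEY}` placeholder is rendered before this file is handed to the API server. A minimal sketch of generating a 32-byte AES key and substituting it, assuming `envsubst` from gettext is available; the guide's data encryption lab performs the equivalent steps:

```bash
# aescbc requires a 32-byte key, base64 encoded.
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# Render the template into a concrete encryption config.
envsubst < configs/encryption-config.yaml \
  > encryption-config.yaml
```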

configs/kubelet-config.yaml

@@ -1,5 +1,6 @@
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "0.0.0.0"
authentication:
anonymous:
enabled: false
@@ -9,13 +10,16 @@ authentication:
clientCAFile: "/var/lib/kubelet/ca.crt"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
cgroupDriver: systemd
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"
podCIDR: "SUBNET"
enableServer: true
failSwapOn: false
maxPods: 16
memorySwap:
swapBehavior: NoSwap
port: 10250
resolvConf: "/etc/resolv.conf"
registerNode: true
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet.crt"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet.key"
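The `"SUBNET"` placeholder in `podCIDR` is rendered per node before this file is copied to the workers; the worker bootstrapping lab below does exactly this with `sed`. For example, using `node-0`'s pod subnet from the example `machines.txt`:

```bash
# Substitute node-0's pod subnet for the SUBNET placeholder.
sed "s|SUBNET|10.200.0.0/24|g" \
  configs/kubelet-config.yaml > kubelet-config.yaml
```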

docs/01-prerequisites.md

@@ -4,7 +4,7 @@ In this lab you will review the machine requirements necessary to follow this tu
## Virtual or Physical Machines
This tutorial requires four (4) virtual or physical ARM64 machines running Debian 12 (bookworm). The follow table list the four machines and thier CPU, memory, and storage requirements.
This tutorial requires four (4) virtual or physical ARM64 or AMD64 machines running Debian 12 (bookworm). The following table lists the four machines and their CPU, memory, and storage requirements.
| Name | Description | CPU | RAM | Storage |
|---------|------------------------|-----|-------|---------|
@@ -13,18 +13,21 @@ This tutorial requires four (4) virtual or physical ARM64 machines running Debia
| node-0 | Kubernetes worker node | 1 | 2GB | 20GB |
| node-1 | Kubernetes worker node | 1 | 2GB | 20GB |
How you provision the machines is up to you, the only requirement is that each machine meet the above system requirements including the machine specs and OS version. Once you have all four machine provisioned, verify the system requirements by running the `uname` command on each machine:
How you provision the machines is up to you, the only requirement is that each machine meet the above system requirements including the machine specs and OS version. Once you have all four machines provisioned, verify the OS requirements by viewing the `/etc/os-release` file:
```bash
uname -mov
```
```bash
cat /etc/os-release
```
After running the `uname` command you should see the following output:
You should see something similar to the following output:
```text
#1 SMP Debian 6.1.55-1 (2023-09-29) aarch64 GNU/Linux
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
```
You may be surprised to see `aarch64` here, but that is the official name for the Arm Architecture 64-bit instruction set. You will often see `arm64` used by Apple and the maintainers of the Linux kernel when referring to `aarch64` support. This tutorial will use `arm64` consistently throughout to avoid confusion.
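If you want to see both names side by side, compare the kernel's report with Debian's package architecture label (a quick check, assuming a Debian machine):

```bash
uname -m                   # aarch64 (or x86_64 on AMD64 machines)
dpkg --print-architecture  # arm64   (or amd64)
```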
Next: [setting-up-the-jumpbox](02-jumpbox.md)

docs/02-jumpbox.md

@@ -1,8 +1,8 @@
# Set Up The Jumpbox
In this lab you will set up one of the four machines to be a `jumpbox`. This machine will be used to run commands in this tutorial. While a dedicated machine is being used to ensure consistency, these commands can also be run from just about any machine including your personal workstation running macOS or Linux.
In this lab you will set up one of the four machines to be a `jumpbox`. This machine will be used to run commands throughout this tutorial. While a dedicated machine is being used to ensure consistency, these commands can also be run from just about any machine including your personal workstation running macOS or Linux.
Think of the `jumpbox` as the administration machine that you will use as a home base when setting up your Kubernetes cluster from the ground up. One thing we need to do before we get started is install a few command line utilities and clone the Kubernetes The Hard Way git repository, which contains some additional configuration files that will be used to configure various Kubernetes components throughout this tutorial.
Think of the `jumpbox` as the administration machine that you will use as a home base when setting up your Kubernetes cluster from the ground up. Before we get started we need to install a few command line utilities and clone the Kubernetes The Hard Way git repository, which contains some additional configuration files that will be used to configure various Kubernetes components throughout this tutorial.
Log in to the `jumpbox`:
@@ -14,10 +14,13 @@ All commands will be run as the `root` user. This is being done for the sake of
### Install Command Line Utilities
Now that you are logged into the `jumpbox` machine as the `root` user, you will install the command line utilities that will be used to perform various tasks throughout the tutorial.
```bash
apt-get -y install wget curl vim openssl git
{
apt-get update
apt-get -y install wget curl vim openssl git
}
```
### Sync GitHub Repository
@@ -49,59 +52,75 @@ pwd
In this section you will download the binaries for the various Kubernetes components. The binaries will be stored in the `downloads` directory on the `jumpbox`, which will reduce the amount of internet bandwidth required to complete this tutorial as we avoid downloading the binaries multiple times for each machine in our Kubernetes cluster.
From the `kubernetes-the-hard-way` directory create a `downloads` directory using the `mkdir` command:
The binaries that will be downloaded are listed in either the `downloads-amd64.txt` or `downloads-arm64.txt` file depending on your hardware architecture, which you can review using the `cat` command:
```bash
mkdir downloads
cat downloads-$(dpkg --print-architecture).txt
```
The binaries that will be downloaded are listed in the `downloads.txt` file, which you can review using the `cat` command:
```bash
cat downloads.txt
```
Download the binaries listed in the `downloads.txt` file using the `wget` command:
Download the binaries into a directory called `downloads` using the `wget` command:
```bash
wget -q --show-progress \
--https-only \
--timestamping \
-P downloads \
-i downloads.txt
-i downloads-$(dpkg --print-architecture).txt
```
Depending on your internet connection speed it may take a while to download the `584` megabytes of binaries, and once the download is complete, you can list them using the `ls` command:
Depending on your internet connection speed it may take a while to download over `500` megabytes of binaries, and once the download is complete, you can list them using the `ls` command:
```bash
ls -loh downloads
ls -oh downloads
```
```text
total 584M
-rw-r--r-- 1 root 41M May 9 13:35 cni-plugins-linux-arm64-v1.3.0.tgz
-rw-r--r-- 1 root 34M Oct 26 15:21 containerd-1.7.8-linux-arm64.tar.gz
-rw-r--r-- 1 root 22M Aug 14 00:19 crictl-v1.28.0-linux-arm.tar.gz
-rw-r--r-- 1 root 15M Jul 11 02:30 etcd-v3.4.27-linux-arm64.tar.gz
-rw-r--r-- 1 root 111M Oct 18 07:34 kube-apiserver
-rw-r--r-- 1 root 107M Oct 18 07:34 kube-controller-manager
-rw-r--r-- 1 root 51M Oct 18 07:34 kube-proxy
-rw-r--r-- 1 root 52M Oct 18 07:34 kube-scheduler
-rw-r--r-- 1 root 46M Oct 18 07:34 kubectl
-rw-r--r-- 1 root 101M Oct 18 07:34 kubelet
-rw-r--r-- 1 root 9.6M Aug 10 18:57 runc.arm64
```
Extract the component binaries from the release archives and organize them under the `downloads` directory.
```bash
{
ARCH=$(dpkg --print-architecture)
mkdir -p downloads/{client,cni-plugins,controller,worker}
tar -xvf downloads/crictl-v1.32.0-linux-${ARCH}.tar.gz \
-C downloads/worker/
tar -xvf downloads/containerd-2.1.0-beta.0-linux-${ARCH}.tar.gz \
--strip-components 1 \
-C downloads/worker/
tar -xvf downloads/cni-plugins-linux-${ARCH}-v1.6.2.tgz \
-C downloads/cni-plugins/
tar -xvf downloads/etcd-v3.6.0-rc.3-linux-${ARCH}.tar.gz \
-C downloads/ \
--strip-components 1 \
etcd-v3.6.0-rc.3-linux-${ARCH}/etcdctl \
etcd-v3.6.0-rc.3-linux-${ARCH}/etcd
mv downloads/{etcdctl,kubectl} downloads/client/
mv downloads/{etcd,kube-apiserver,kube-controller-manager,kube-scheduler} \
downloads/controller/
mv downloads/{kubelet,kube-proxy} downloads/worker/
mv downloads/runc.${ARCH} downloads/worker/runc
}
```
```bash
rm -rf downloads/*gz
```
Make the binaries executable.
```bash
{
chmod +x downloads/{client,cni-plugins,controller,worker}/*
}
```
### Install kubectl
In this section you will install the `kubectl`, the official Kubernetes client command line tool, on the `jumpbox` machine. `kubectl will be used to interact with the Kubernetes control once your cluster is provisioned later in this tutorial.
In this section you will install `kubectl`, the official Kubernetes client command line tool, on the `jumpbox` machine. `kubectl` will be used to interact with the Kubernetes control plane once your cluster is provisioned later in this tutorial.
Use the `chmod` command to make the `kubectl` binary executable and move it to the `/usr/local/bin/` directory:
```bash
{
chmod +x downloads/kubectl
cp downloads/kubectl /usr/local/bin/
cp downloads/client/kubectl /usr/local/bin/
}
```
@@ -112,8 +131,8 @@ kubectl version --client
```
```text
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Client Version: v1.32.3
Kustomize Version: v5.5.0
```
At this point the `jumpbox` has been set up with all the command line tools and utilities necessary to complete the labs in this tutorial.

docs/03-compute-resources.md

@@ -10,7 +10,7 @@ This tutorial will leverage a text file, which will serve as a machine database,
IPV4_ADDRESS FQDN HOSTNAME POD_SUBNET
```
Each of the columns corresponds to a machine IP address `IPV4_ADDRESS`, fully qualified domain name `FQDN`, host name `HOSTNAME`, and the IP subnet `POD_SUBNET`. Kubernetes assigns one IP address per `pod` and the `POD_SUBNET` represents the unique IP address range assigned to each machine in the cluster for doing so.
Here is an example machine database similar to the one used when creating this tutorial. Notice the IP addresses have been masked out. Your machines can be assigned any IP address as long as each machine is reachable from each other and the `jumpbox`.
@@ -19,12 +19,12 @@ cat machines.txt
```
```text
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0 10.200.0.0/24
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1 10.200.1.0/24
```
Now it's your turn to create a `machines.txt` file with the details for the three machines you will be using to create your Kubernetes cluster. Use the example machine database from above and add the details for your machines.
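A quick way to sanity-check the file is to parse it with the same `while read` loop used throughout this tutorial; empty fields or misaligned columns usually point to stray whitespace:

```bash
# The server entry has no pod subnet, so SUBNET is empty for it.
while read IP FQDN HOST SUBNET; do
  echo "${HOST}: ip=${IP} fqdn=${FQDN} pod_subnet=${SUBNET:-none}"
done < machines.txt
```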
## Configuring SSH Access
@@ -34,17 +34,17 @@ SSH will be used to configure the machines in the cluster. Verify that you have
If `root` SSH access is enabled for each of your machines you can skip this section.
By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons as the `root` user is a well known user on Linux systems, and if a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mention earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. On each machine login via SSH using your user account, then switch to the `root` user using the `su` command:
By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons as the `root` user has total administrative control of unix-like systems. If a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mentioned earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. Log on to each machine via SSH using your user account, then switch to the `root` user using the `su` command:
```bash
su - root
```
Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and the `PermitRootLogin` option to `yes`:
Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the `PermitRootLogin` option to `yes`:
```bash
sed -i \
's/^#PermitRootLogin.*/PermitRootLogin yes/' \
's/^#*PermitRootLogin.*/PermitRootLogin yes/' \
/etc/ssh/sshd_config
```
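The SSH daemon only reads its configuration at startup, so restart it for the change to take effect (assuming systemd; on Debian 12 `sshd` is an alias for the `ssh` unit):

```bash
systemctl restart sshd
```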
@@ -66,9 +66,9 @@ ssh-keygen
```text
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
```
@@ -76,7 +76,7 @@ Your public key has been saved in /root/.ssh/id_rsa.pub
Copy the SSH public key to each machine:
```bash
while read IP FQDN HOST SUBNET; do
ssh-copy-id root@${IP}
done < machines.txt
```
@@ -84,30 +84,31 @@ done < machines.txt
Once each key is added, verify SSH public key access is working:
```bash
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} uname -o -m
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} hostname
done < machines.txt
```
```text
aarch64 GNU/Linux
aarch64 GNU/Linux
aarch64 GNU/Linux
server
node-0
node-1
```
## Hostnames
In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. The hostname also play a major role within the cluster. Instead of Kubernetes clients using an IP address to issue commands to the Kubernetes API server, those client will use the `server` hostname instead. Hostnames are also used by each worker machine, `node-0` and `node-1` when registering with a given Kubernetes cluster.
In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. The hostname also plays a major role within the cluster. Instead of Kubernetes clients using an IP address to issue commands to the Kubernetes API server, those clients will use the `server` hostname instead. Hostnames are also used by each worker machine, `node-0` and `node-1` when registering with a given Kubernetes cluster.
To configure the hostname for each machine, run the following commands on the `jumpbox`.
Set the hostname on each machine listed in the `machines.txt` file:
```bash
while read IP FQDN HOST SUBNET; do
CMD="sed -i 's/^127.0.1.1.*/127.0.1.1\t${FQDN} ${HOST}/' /etc/hosts"
ssh -n root@${IP} "$CMD"
ssh -n root@${IP} hostnamectl hostname ${HOST}
ssh -n root@${IP} hostnamectl set-hostname ${HOST}
ssh -n root@${IP} systemctl restart systemd-hostnamed
done < machines.txt
```
@@ -125,9 +126,9 @@ node-0.kubernetes.local
node-1.kubernetes.local
```
## DNS
## Host Lookup Table
In this section you will generate a DNS `hosts` file which will be appended to `jumpbox` local `/etc/hosts` file and to the `/etc/hosts` file of all three machines used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.
In this section you will generate a `hosts` file which will be appended to `/etc/hosts` file on the `jumpbox` and to the `/etc/hosts` files on all three cluster members used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.
Create a new `hosts` file and add a header to identify the machines being added:
@@ -136,16 +137,16 @@ echo "" > hosts
echo "# Kubernetes The Hard Way" >> hosts
```
Generate a DNS entry for each machine in the `machines.txt` file and append it to the `hosts` file:
Generate a host entry for each machine in the `machines.txt` file and append it to the `hosts` file:
```bash
while read IP FQDN HOST SUBNET; do
ENTRY="${IP} ${FQDN} ${HOST}"
echo $ENTRY >> hosts
done < machines.txt
```
Review the DNS entries in the `hosts` file:
Review the host entries in the `hosts` file:
```bash
cat hosts
@@ -159,7 +160,7 @@ XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
```
## Adding DNS Entries To A Local Machine
## Adding `/etc/hosts` Entries To A Local Machine
In this section you will append the DNS entries from the `hosts` file to the local `/etc/hosts` file on your `jumpbox` machine.
@@ -184,8 +185,6 @@ cat /etc/hosts
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# Kubernetes The Hard Way
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
@@ -196,19 +195,19 @@ At this point you should be able to SSH to each machine listed in the `machines.
```bash
for host in server node-0 node-1
do ssh root@${host} uname -o -m -n
do ssh root@${host} hostname
done
```
```text
server aarch64 GNU/Linux
node-0 aarch64 GNU/Linux
node-1 aarch64 GNU/Linux
server
node-0
node-1
```
## Adding DNS Entries To The Remote Machines
## Adding `/etc/hosts` Entries To The Remote Machines
In this section you will append the DNS entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.
In this section you will append the host entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.
Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:
@@ -220,6 +219,6 @@ while read IP FQDN HOST SUBNET; do
done < machines.txt
```
At this point hostnames can be used when connecting to machines from your `jumpbox` machine, or any of the three machines in the Kubernetes cluster. Instead of using IP addresess you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.
At this point, hostnames can be used when connecting to machines from your `jumpbox` machine, or any of the three machines in the Kubernetes cluster. Instead of using IP addresses you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

docs/04-certificate-authority.md

@@ -4,7 +4,7 @@ In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/w
## Certificate Authority
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates for the other Kubernetes components. Setting up a CA and generating certificates using `openssl` can be time-consuming, especially when doing it for the first time. To streamline this lab, I've included an openssl configuration file `ca.conf`, which defines all the details needed to generate certificates for each Kubernetes component.
Take a moment to review the `ca.conf` configuration file:
@@ -14,7 +14,7 @@ cat ca.conf
You don't need to understand everything in the `ca.conf` file to complete this tutorial, but you should consider it a starting point for learning `openssl` and the configuration that goes into managing certificates at a high level.
Every certificate authority starts with a private key and root certificate. In this section we are going to create a self-signed certificate authority, and while that's all we need for this tutorial, this shouldn't be considered something you would do in a real-world production level environment.
Every certificate authority starts with a private key and root certificate. In this section we are going to create a self-signed certificate authority, and while that's all we need for this tutorial, this shouldn't be considered something you would do in a real-world production environment.
Generate the CA configuration file, certificate, and private key:
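The command block itself is elided from this hunk. In outline, it creates an RSA private key and a long-lived self-signed root certificate driven by `ca.conf`; a minimal sketch, assuming OpenSSL 3.x as shipped with Debian 12:

```bash
{
  # Private key for the root CA.
  openssl genrsa -out ca.key 4096

  # Ten-year self-signed root certificate, with the subject and
  # extensions taken from ca.conf.
  openssl req -x509 -new -sha512 -noenc \
    -key ca.key -days 3653 \
    -config ca.conf \
    -out ca.crt
}
```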
@@ -57,7 +57,7 @@ for i in ${certs[*]}; do
openssl req -new -key "${i}.key" -sha256 \
-config "ca.conf" -section ${i} \
-out "${i}.csr"
openssl x509 -req -days 3653 -in "${i}.csr" \
-copy_extensions copyall \
-sha256 -CA "ca.crt" \
@@ -75,21 +75,21 @@ ls -1 *.crt *.key *.csr
## Distribute the Client and Server Certificates
In this section you will copy the various certificates to each machine under a directory that each Kubernetes components will search for the certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets as they are often used as credentials by the Kubernetes components to authenticate to each other.
In this section you will copy the various certificates to every machine at a path where each Kubernetes component will search for its certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets as they are used as credentials by the Kubernetes components to authenticate to each other.
Copy the appropriate certificates and private keys to the `node-0` and `node-1` machines:
```bash
for host in node-0 node-1; do
ssh root@$host mkdir /var/lib/kubelet/
scp ca.crt root@$host:/var/lib/kubelet/
scp $host.crt \
root@$host:/var/lib/kubelet/kubelet.crt
scp $host.key \
root@$host:/var/lib/kubelet/kubelet.key
ssh root@${host} mkdir /var/lib/kubelet/
scp ca.crt root@${host}:/var/lib/kubelet/
scp ${host}.crt \
root@${host}:/var/lib/kubelet/kubelet.crt
scp ${host}.key \
root@${host}:/var/lib/kubelet/kubelet.key
done
```

docs/05-kubernetes-configuration-files.md

@@ -1,6 +1,6 @@
# Generating Kubernetes Configuration Files for Authentication
In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
In this lab you will generate [Kubernetes client configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), typically called kubeconfigs, which configure Kubernetes clients to connect and authenticate to Kubernetes API Servers.
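Every kubeconfig in this lab is assembled from the same four `kubectl config` subcommands. A representative sketch for a single worker node, assuming the certificates from the previous lab sit in the current directory and that kubelet users follow the `system:node:<name>` convention expected by the Node Authorizer:

```bash
# Where the cluster lives and which CA to trust.
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.crt \
  --embed-certs=true \
  --server=https://server.kubernetes.local:6443 \
  --kubeconfig=node-0.kubeconfig

# Which client certificate to present.
kubectl config set-credentials system:node:node-0 \
  --client-certificate=node-0.crt \
  --client-key=node-0.key \
  --embed-certs=true \
  --kubeconfig=node-0.kubeconfig

# Tie the two together and select the context.
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:node:node-0 \
  --kubeconfig=node-0.kubeconfig

kubectl config use-context default \
  --kubeconfig=node-0.kubeconfig
```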
## Client Authentication Configs
@@ -8,11 +8,11 @@ In this section you will generate kubeconfig files for the `kubelet` and the `ad
### The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/reference/access-authn-authz/node/).
> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
Generate a kubeconfig file the node-0 worker node:
Generate a kubeconfig file for the `node-0` and `node-1` worker nodes:
```bash
for host in node-0 node-1; do
@@ -184,21 +184,21 @@ admin.kubeconfig
## Distribute the Kubernetes Configuration Files
Copy the `kubelet` and `kube-proxy` kubeconfig files to the node-0 instance:
Copy the `kubelet` and `kube-proxy` kubeconfig files to the `node-0` and `node-1` machines:
```bash
for host in node-0 node-1; do
ssh root@$host "mkdir /var/lib/{kube-proxy,kubelet}"
ssh root@${host} "mkdir -p /var/lib/{kube-proxy,kubelet}"
scp kube-proxy.kubeconfig \
root@$host:/var/lib/kube-proxy/kubeconfig \
root@${host}:/var/lib/kube-proxy/kubeconfig
scp ${host}.kubeconfig \
root@$host:/var/lib/kubelet/kubeconfig
root@${host}:/var/lib/kubelet/kubeconfig
done
```
Copy the `kube-controller-manager` and `kube-scheduler` kubeconfig files to the controller instance:
Copy the `kube-controller-manager` and `kube-scheduler` kubeconfig files to the `server` machine:
```bash
scp admin.kubeconfig \

docs/07-bootstrapping-etcd.md

@@ -1,14 +1,15 @@
# Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a single node etcd cluster.
## Prerequisites
Copy `etcd` binaries and systemd unit files to the `server` instance:
Copy `etcd` binaries and systemd unit files to the `server` machine:
```bash
scp \
downloads/etcd-v3.4.27-linux-arm64.tar.gz \
downloads/controller/etcd \
downloads/client/etcdctl \
units/etcd.service \
root@server:~/
```
@@ -27,8 +28,7 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
```bash
{
tar -xvf etcd-v3.4.27-linux-arm64.tar.gz
mv etcd-v3.4.27-linux-arm64/etcd* /usr/local/bin/
mv etcd etcdctl /usr/local/bin/
}
```
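After the `etcd.service` unit is started (the surrounding steps are elided from this diff), membership can be confirmed on the `server` machine; a single-node cluster should report exactly one started member:

```bash
etcdctl member list
```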

docs/08-bootstrapping-kubernetes-controllers.md

@@ -1,17 +1,17 @@
# Bootstrapping the Kubernetes Control Plane
In this lab you will bootstrap the Kubernetes control plane. The following components will be installed the controller machine: Kubernetes API Server, Scheduler, and Controller Manager.
In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the `server` machine: Kubernetes API Server, Scheduler, and Controller Manager.
## Prerequisites
Copy Kubernetes binaries and systemd unit files to the `server` instance:
Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files to the `server` machine:
```bash
scp \
downloads/kube-apiserver \
downloads/kube-controller-manager \
downloads/kube-scheduler \
downloads/kubectl \
downloads/controller/kube-apiserver \
downloads/controller/kube-controller-manager \
downloads/controller/kube-scheduler \
downloads/client/kubectl \
units/kube-apiserver.service \
units/kube-controller-manager.service \
units/kube-scheduler.service \
@@ -20,7 +20,7 @@ scp \
root@server:~/
```
The commands in this lab must be run on the controller instance: `server`. Login to the controller instance using the `ssh` command. Example:
The commands in this lab must be run on the `server` machine. Login to the `server` machine using the `ssh` command. Example:
```bash
ssh root@server
@@ -40,10 +40,6 @@ Install the Kubernetes binaries:
```bash
{
chmod +x kube-apiserver \
kube-controller-manager \
kube-scheduler kubectl
mv kube-apiserver \
kube-controller-manager \
kube-scheduler kubectl \
@@ -111,10 +107,10 @@ mv kube-scheduler.service /etc/systemd/system/
```bash
{
systemctl daemon-reload
systemctl enable kube-apiserver \
kube-controller-manager kube-scheduler
systemctl start kube-apiserver \
kube-controller-manager kube-scheduler
}
@@ -122,9 +118,28 @@ mv kube-scheduler.service /etc/systemd/system/
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
You can check whether any of the control plane components are active using the `systemctl` command. For example, to check if the `kube-apiserver` is fully initialized and active, run the following command:
```bash
systemctl is-active kube-apiserver
```
For a more detailed status check, which includes additional process information and log messages, use the `systemctl status` command:
```bash
systemctl status kube-apiserver
```
If you run into any errors, or want to view the logs for any of the control plane components, use the `journalctl` command. For example, to view the logs for the `kube-apiserver` run the following command:
```bash
journalctl -u kube-apiserver
```
### Verification
At this point the Kubernetes control plane components should be up and running. Verify this using the `kubectl` command line tool:
```bash
kubectl cluster-info \
--kubeconfig admin.kubeconfig
@@ -138,15 +153,15 @@ Kubernetes control plane is running at https://127.0.0.1:6443
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access) API to determine authorization.
The commands in this section will affect the entire cluster and only need to be run on the controller node.
The commands in this section will affect the entire cluster and only need to be run on the `server` machine.
```bash
ssh root@server
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```bash
kubectl apply -f kube-apiserver-to-kubelet.yaml \
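```

The manifest itself ships with the repository; the following is only a sketch of the two objects it defines, assuming the standard `rbac.authorization.k8s.io/v1` API and the `kubernetes` user that the API server's client certificate identifies it as. Prefer the repository's copy over retyping it:

```bash
# Illustrative only: a ClusterRole granting access to the Kubelet API,
# bound to the user the API server authenticates as.
cat <<'EOF' > kube-apiserver-to-kubelet.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - kind: User
    name: kubernetes
EOF
```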
@@ -160,18 +175,19 @@ At this point the Kubernetes control plane is up and running. Run the following
Make an HTTP request for the Kubernetes version info:
```bash
curl -k --cacert ca.crt https://server.kubernetes.local:6443/version
curl --cacert ca.crt \
https://server.kubernetes.local:6443/version
```
```text
{
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"minor": "32",
"gitVersion": "v1.32.3",
"gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"buildDate": "2025-03-11T19:52:21Z",
"goVersion": "go1.23.6",
"compiler": "gc",
"platform": "linux/arm64"
}

docs/09-bootstrapping-kubernetes-workers.md

@@ -1,47 +1,51 @@
# Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
## Prerequisites
Copy Kubernetes binaries and systemd unit files to each worker instance:
The commands in this section must be run from the `jumpbox`.
Copy the Kubernetes binaries and systemd unit files to each worker instance:
```bash
for host in node-0 node-1; do
SUBNET=$(grep $host machines.txt | cut -d " " -f 4)
for HOST in node-0 node-1; do
SUBNET=$(grep ${HOST} machines.txt | cut -d " " -f 4)
sed "s|SUBNET|$SUBNET|g" \
configs/10-bridge.conf > 10-bridge.conf
configs/10-bridge.conf > 10-bridge.conf
sed "s|SUBNET|$SUBNET|g" \
configs/kubelet-config.yaml > kubelet-config.yaml
scp 10-bridge.conf kubelet-config.yaml \
root@$host:~/
root@${HOST}:~/
done
```
```bash
for host in node-0 node-1; do
for HOST in node-0 node-1; do
scp \
downloads/runc.arm64 \
downloads/crictl-v1.28.0-linux-arm.tar.gz \
downloads/cni-plugins-linux-arm64-v1.3.0.tgz \
downloads/containerd-1.7.8-linux-arm64.tar.gz \
downloads/kubectl \
downloads/kubelet \
downloads/kube-proxy \
downloads/worker/* \
downloads/client/kubectl \
configs/99-loopback.conf \
configs/containerd-config.toml \
configs/kubelet-config.yaml \
configs/kube-proxy-config.yaml \
units/containerd.service \
units/kubelet.service \
units/kube-proxy.service \
root@$host:~/
root@${HOST}:~/
done
```
The commands in this lab must be run on each worker instance: `node-0`, `node-1`. Login to the worker instance using the `ssh` command. Example:
```bash
for HOST in node-0 node-1; do
scp \
downloads/cni-plugins/* \
root@${HOST}:~/cni-plugins/
done
```
The commands in the next section must be run on each worker instance: `node-0`, `node-1`. Login to the worker instance using the `ssh` command. Example:
```bash
ssh root@node-0
@@ -54,23 +58,23 @@ Install the OS dependencies:
```bash
{
apt-get update
apt-get -y install socat conntrack ipset
apt-get -y install socat conntrack ipset kmod
}
```
> The socat binary enables support for the `kubectl port-forward` command.
### Disable Swap
By default, the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
Kubernetes has limited support for the use of swap memory, as it is difficult to provide guarantees and account for pod memory utilization when swap is involved.
Verify if swap is enabled:
Verify if swap is disabled:
```bash
swapon --show
```
If output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:
If output is empty then swap is disabled. If swap is enabled run the following command to disable swap immediately:
```bash
swapoff -a
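# Note: swapoff only lasts until the next reboot. To keep swap disabled
# permanently, also comment out any swap entries in /etc/fstab (an
# assumption about a standard Debian install; adjust to your setup):
sed -i '/\sswap\s/ s/^/#/' /etc/fstab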
@@ -94,14 +98,10 @@ Install the worker binaries:
```bash
{
mkdir -p containerd
tar -xvf crictl-v1.28.0-linux-arm.tar.gz
tar -xvf containerd-1.7.8-linux-arm64.tar.gz -C containerd
tar -xvf cni-plugins-linux-arm64-v1.3.0.tgz -C /opt/cni/bin/
mv runc.arm64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
mv containerd/bin/* /bin/
mv crictl kube-proxy kubelet runc \
/usr/local/bin/
mv containerd containerd-shim-runc-v2 containerd-stress /bin/
mv cni-plugins/* /opt/cni/bin/
}
```
@@ -113,6 +113,25 @@ Create the `bridge` network configuration file:
mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```
To ensure network traffic crossing the CNI `bridge` network is processed by `iptables`, load and configure the `br-netfilter` kernel module:
```bash
{
modprobe br-netfilter
echo "br-netfilter" >> /etc/modules-load.d/modules.conf
}
```
```bash
{
echo "net.bridge.bridge-nf-call-iptables = 1" \
>> /etc/sysctl.d/kubernetes.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" \
>> /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
}
```
### Configure containerd
Install the `containerd` configuration files:
@@ -155,9 +174,21 @@ Create the `kubelet-config.yaml` configuration file:
}
```
Check if the kubelet service is running:
```bash
systemctl is-active kubelet
```
```text
active
```
Be sure to complete the steps in this section on each worker node, `node-0` and `node-1`, before moving on to the next section.
## Verification
The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the `jumpbox` machine.
Run the following commands from the `jumpbox` machine.
List the registered Kubernetes nodes:
@@ -169,8 +200,8 @@ ssh root@server \
```
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 1m v1.28.3
node-1 Ready <none> 10s v1.28.3
node-0 Ready <none> 1m v1.32.3
node-1 Ready <none> 10s v1.32.3
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

docs/10-configuring-kubectl.md

@@ -8,22 +8,22 @@ In this lab you will generate a kubeconfig file for the `kubectl` command line u
Each kubeconfig requires a Kubernetes API Server to connect to.
You should be able to ping `server.kubernetes.local` based on the `/etc/hosts` DNS entry from a previous lap.
You should be able to ping `server.kubernetes.local` based on the `/etc/hosts` DNS entry from a previous lab.
```bash
curl -k --cacert ca.crt \
curl --cacert ca.crt \
https://server.kubernetes.local:6443/version
```
```text
{
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"minor": "32",
"gitVersion": "v1.32.3",
"gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"buildDate": "2025-03-11T19:52:21Z",
"goVersion": "go1.23.6",
"compiler": "gc",
"platform": "linux/arm64"
}
@@ -61,9 +61,9 @@ kubectl version
```
```text
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
Client Version: v1.32.3
Kustomize Version: v5.5.0
Server Version: v1.32.3
```
List the nodes in the remote Kubernetes cluster:
@@ -73,9 +73,9 @@ kubectl get nodes
```
```
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 30m v1.28.3
node-1 Ready <none> 35m v1.28.3
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 10m v1.32.3
node-1 Ready <none> 10m v1.32.3
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

docs/12-smoke-test.md

@@ -25,24 +25,24 @@ ssh root@server \
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 9b 79 a5 b9 49 a2 77 |:v1:key1:.y..I.w|
00000050 c0 6a c9 12 7c b4 c7 c4 64 41 37 97 4a 83 a9 c1 |.j..|...dA7.J...|
00000060 4f 14 ae 73 ab b8 38 26 11 14 0a 40 b8 f3 0e 0a |O..s..8&...@....|
00000070 f5 a7 a2 2c b6 35 b1 83 22 15 aa d0 dd 25 11 3e |...,.5.."....%.>|
00000080 c4 e9 69 1c 10 7a 9d f7 dc 22 28 89 2c 83 dd 0b |..i..z..."(.,...|
00000090 a4 5f 3a 93 0f ff 1f f8 bc 97 43 0e e5 05 5d f9 |._:.......C...].|
000000a0 ef 88 02 80 49 81 f1 58 b0 48 39 19 14 e1 b1 34 |....I..X.H9....4|
000000b0 f6 b0 9b 0a 9c 53 27 2b 23 b9 e6 52 b4 96 81 70 |.....S'+#..R...p|
000000c0 a7 b6 7b 4f 44 d4 9c 07 51 a3 1b 22 96 4c 24 6c |..{OD...Q..".L$l|
000000d0 44 6c db 53 f5 31 e6 3f 15 7b 4c 23 06 c1 37 73 |Dl.S.1.?.{L#..7s|
000000e0 e1 97 8e 4e 1a 2e 2c 1a da 85 c3 ff 42 92 d0 f1 |...N..,.....B...|
000000f0 87 b8 39 89 e8 46 2e b3 56 68 41 b8 1e 29 3d ba |..9..F..VhA..)=.|
00000100 dd d8 27 4c 7f d5 fe 97 3c a3 92 e9 3d ae 47 ee |..'L....<...=.G.|
00000110 24 6a 0b 7c ac b8 28 e6 25 a6 ce 04 80 ee c2 eb |$j.|..(.%.......|
00000120 4c 86 fa 70 66 13 63 59 03 c2 70 57 8b fb a1 d6 |L..pf.cY..pW....|
00000130 f2 58 08 84 43 f3 70 7f ad d8 30 63 3e ef ff b6 |.X..C.p...0c>...|
00000140 b2 06 c3 45 c5 d8 89 d3 47 4a 72 ca 20 9b cf b5 |...E....GJr. ...|
00000150 4b 3d 6d b4 58 ae 42 4b 7f 0a |K=m.X.BK..|
00000040 3a 76 31 3a 6b 65 79 31 3a 4f 1b 80 d8 89 72 f4 |:v1:key1:O....r.|
00000050 60 8a 2c a0 76 1a e1 dc 98 d6 00 7a a4 2f f3 92 |`.,.v......z./..|
00000060 87 63 c9 22 f4 58 c8 27 b9 ff 2c 2e 1a b6 55 be |.c.".X.'..,...U.|
00000070 d5 5c 4d 69 82 2f b7 e4 b3 b0 12 e1 58 c4 9c 77 |.\Mi./......X..w|
00000080 78 0c 1a 90 c9 c1 23 6c 73 8e 6e fd 8e 9c 3d 84 |x.....#ls.n...=.|
00000090 7d bf 69 81 ce c9 aa 38 be 3b dd 66 aa a3 33 27 |}.i....8.;.f..3'|
000000a0 df be 6d ac 1c 6d 8a 82 df b3 19 da 0f 93 94 1e |..m..m..........|
000000b0 e0 7d 46 8d b5 14 d0 c5 97 e2 94 76 26 a8 cb 33 |.}F........v&..3|
000000c0 57 2a d0 27 a6 5a e1 76 a7 3f f0 b7 0a 7b ff 53 |W*.'.Z.v.?...{.S|
000000d0 cf c9 1a 18 5b 45 f8 b1 06 3b a9 45 02 76 23 61 |....[E...;.E.v#a|
000000e0 5e dc 86 cf 8e a4 d3 c9 5c 6a 6f e6 33 7b 5b 8f |^.......\jo.3{[.|
000000f0 fb 8a 14 74 58 f9 49 2f 97 98 cc 5c d4 4a 10 1a |...tX.I/...\.J..|
00000100 64 0a 79 21 68 a0 9e 7a 03 b7 19 e6 20 e4 1b ce |d.y!h..z.... ...|
00000110 91 64 ce 90 d9 4f 86 ca fb 45 2f d6 56 93 68 e1 |.d...O...E/.V.h.|
00000120 0b aa 8c a0 20 a6 97 fa a1 de 07 6d 5b 4c 02 96 |.... ......m[L..|
00000130 31 70 20 83 16 f9 0a 22 5c 63 ad f1 ea 41 a7 1e |1p ...."\c...A..|
00000140 29 1a d4 a4 e9 d7 0c 04 74 66 04 6d 73 d8 2e 3f |).......tf.ms..?|
00000150 f0 b9 2f 77 bd 07 d7 7c 42 0a |../w...|B.|
0000015a
```
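The readable `k8s:enc:aescbc:v1:key1` prefix records the storage provider (`aescbc`), the schema version, and the name of the encryption key; everything after it is ciphertext. A rough scripted check from the `jumpbox`, reusing the key path from the command above (a sketch; `strings` just filters the binary payload down to printable runs):

```bash
# Prints 1 when the secret is stored encrypted; plaintext storage
# would show readable payload data and no k8s:enc header.
ssh root@server \
  'etcdctl get /registry/secrets/default/kubernetes-the-hard-way' \
  | strings | grep -c 'k8s:enc:aescbc:v1'
```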
@@ -100,15 +100,14 @@ curl --head http://127.0.0.1:8080
```text
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Sun, 29 Oct 2023 01:44:32 GMT
Server: nginx/1.27.4
Date: Sun, 06 Apr 2025 17:17:12 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Last-Modified: Wed, 05 Feb 2025 11:06:32 GMT
Connection: keep-alive
ETag: "6537cac7-267"
ETag: "67a34638-267"
Accept-Ranges: bytes
```
Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
@@ -132,7 +131,7 @@ kubectl logs $POD_NAME
```text
...
127.0.0.1 - - [01/Nov/2023:06:10:17 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.88.1" "-"
127.0.0.1 - - [06/Apr/2025:17:17:12 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.88.1" "-"
```
### Exec
@@ -146,7 +145,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
```
```text
nginx version: nginx/1.25.3
nginx version: nginx/1.27.4
```
## Services
@@ -169,21 +168,28 @@ NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```
Retrieve the hostname of the node running the `nginx` pod:
```bash
NODE_NAME=$(kubectl get pods \
-l app=nginx \
-o jsonpath="{.items[0].spec.nodeName}")
```
Make an HTTP request using the IP address and the `nginx` node port:
```bash
curl -I http://node-0:${NODE_PORT}
curl -I http://${NODE_NAME}:${NODE_PORT}
```
```text
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Sun, 29 Oct 2023 05:11:15 GMT
Server: nginx/1.27.4
Date: Sun, 06 Apr 2025 17:18:36 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Last-Modified: Wed, 05 Feb 2025 11:06:32 GMT
Connection: keep-alive
ETag: "6537cac7-267"
ETag: "67a34638-267"
Accept-Ranges: bytes
```

docs/13-cleanup.md

@@ -4,4 +4,8 @@ In this lab you will delete the compute resources created during this tutorial.
## Compute Instances
Delete the controller and worker compute instances.
Previous versions of this guide made use of GCP resources for various aspects of compute and networking. The current version is agnostic, and all configuration is performed on the `jumpbox`, `server`, or nodes.
Clean up is as simple as deleting all virtual machines you created for this exercise.
Next: [Start Over](../README.md)

downloads-amd64.txt (new file)

@@ -0,0 +1,11 @@
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubectl
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-apiserver
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-controller-manager
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-scheduler
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-proxy
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubelet
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-amd64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.3.0-rc.1/runc.amd64
https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
https://github.com/containerd/containerd/releases/download/v2.1.0-beta.0/containerd-2.1.0-beta.0-linux-amd64.tar.gz
https://github.com/etcd-io/etcd/releases/download/v3.6.0-rc.3/etcd-v3.6.0-rc.3-linux-amd64.tar.gz

downloads-arm64.txt (new file)

@@ -0,0 +1,11 @@
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kubectl
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-apiserver
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-controller-manager
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-scheduler
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-proxy
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kubelet
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-arm64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.3.0-rc.1/runc.arm64
https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-arm64-v1.6.2.tgz
https://github.com/containerd/containerd/releases/download/v2.1.0-beta.0/containerd-2.1.0-beta.0-linux-arm64.tar.gz
https://github.com/etcd-io/etcd/releases/download/v3.6.0-rc.3/etcd-v3.6.0-rc.3-linux-arm64.tar.gz

downloads.txt (deleted)

@@ -1,11 +0,0 @@
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-apiserver
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-controller-manager
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-scheduler
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-arm.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.arm64
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz
https://github.com/containerd/containerd/releases/download/v1.7.8/containerd-1.7.8-linux-arm64.tar.gz
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-proxy
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubelet
https://github.com/etcd-io/etcd/releases/download/v3.4.27/etcd-v3.4.27-linux-arm64.tar.gz

units/etcd.service

@@ -4,7 +4,6 @@ Documentation=https://github.com/etcd-io/etcd
[Service]
Type=notify
Environment="ETCD_UNSUPPORTED_ARCH=arm64"
ExecStart=/usr/local/bin/etcd \
--name controller \
--initial-advertise-peer-urls http://127.0.0.1:2380 \
@@ -19,4 +18,4 @@ Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

units/kube-apiserver.service

@@ -5,7 +5,6 @@ Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--allow-privileged=true \
--apiserver-count=1 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
@@ -24,7 +23,6 @@ ExecStart=/usr/local/bin/kube-apiserver \
--service-account-key-file=/var/lib/kubernetes/service-accounts.crt \
--service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \
--service-account-issuer=https://server.kubernetes.local:6443 \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kube-api-server.crt \
--tls-private-key-file=/var/lib/kubernetes/kube-api-server.key \
@@ -33,4 +31,4 @@ Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

units/kubelet.service

@@ -8,10 +8,9 @@ Requires=containerd.service
ExecStart=/usr/local/bin/kubelet \
--config=/var/lib/kubelet/kubelet-config.yaml \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--register-node=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target