support arm64 and amd64

amd64-and-arm64
Kelsey Hightower 2025-04-09 23:08:13 -07:00
parent b2bf9fb2f6
commit b05b6f2fd5
7 changed files with 85 additions and 65 deletions

View File

@@ -4,7 +4,7 @@ In this lab you will review the machine requirements necessary to follow this tu
 ## Virtual or Physical Machines
-This tutorial requires four (4) virtual or physical ARM64 machines running Debian 12 (bookworm). The following table lists the four machines and their CPU, memory, and storage requirements.
+This tutorial requires four (4) virtual or physical ARM64 or AMD64 machines running Debian 12 (bookworm). The following table lists the four machines and their CPU, memory, and storage requirements.
 | Name | Description | CPU | RAM | Storage |
 |---------|------------------------|-----|-------|---------|
@@ -13,18 +13,21 @@ This tutorial requires four (4) virtual or physical ARM64 machines running Debia
 | node-0 | Kubernetes worker node | 1 | 2GB | 20GB |
 | node-1 | Kubernetes worker node | 1 | 2GB | 20GB |
-How you provision the machines is up to you, the only requirement is that each machine meet the above system requirements including the machine specs and OS version. Once you have all four machines provisioned, verify the system requirements by running the `uname` command on each machine:
+How you provision the machines is up to you, the only requirement is that each machine meet the above system requirements including the machine specs and OS version. Once you have all four machines provisioned, verify the OS requirements by viewing the `/etc/os-release` file:
 ```bash
-uname -mov
+cat /etc/os-release
 ```
-After running the `uname` command you should see the following output:
+You should see something similar to the following output:
 ```text
-#1 SMP Debian 6.1.115-1 (2024-11-01) aarch64 GNU/Linux
+PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
+NAME="Debian GNU/Linux"
+VERSION_ID="12"
+VERSION="12 (bookworm)"
+VERSION_CODENAME=bookworm
+ID=debian
 ```
-You may be surprised to see `aarch64` here, but that is the official name for the Arm Architecture 64-bit instruction set. You will often see `arm64` used by Apple, and the maintainers of the Linux kernel, when referring to support for `aarch64`. This tutorial will use `arm64` consistently throughout to avoid confusion.
 Next: [setting-up-the-jumpbox](02-jumpbox.md)
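Since the tutorial now supports both architectures, it can also help to confirm which architecture each machine reports before continuing. A minimal check (an editorial sketch, not part of this change; both commands are standard on Debian 12):

```bash
uname -m                    # prints aarch64 on arm64 machines, x86_64 on amd64 machines
dpkg --print-architecture   # prints arm64 or amd64, the names used later for the download lists
```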

View File

@@ -52,20 +52,20 @@ pwd
 In this section you will download the binaries for the various Kubernetes components. The binaries will be stored in the `downloads` directory on the `jumpbox`, which will reduce the amount of internet bandwidth required to complete this tutorial as we avoid downloading the binaries multiple times for each machine in our Kubernetes cluster.
-The binaries that will be downloaded are listed in the `downloads.txt` file, which you can review using the `cat` command:
+The binaries that will be downloaded are listed in either the `downloads-amd64.txt` or `downloads-arm64.txt` file depending on your hardware architecture, which you can review using the `cat` command:
 ```bash
-cat downloads.txt
+cat downloads-$(dpkg --print-architecture).txt
 ```
-Download the binaries listed in the `downloads.txt` file into a directory called `downloads` using the `wget` command:
+Download the binaries into a directory called `downloads` using the `wget` command:
 ```bash
 wget -q --show-progress \
   --https-only \
   --timestamping \
   -P downloads \
-  -i downloads.txt
+  -i downloads-$(dpkg --print-architecture).txt
 ```
 Depending on your internet connection speed it may take a while to download over `500` megabytes of binaries, and once the download is complete, you can list them using the `ls` command:
@@ -74,19 +74,42 @@ Depending on your internet connection speed it may take a while to download over
 ls -oh downloads
 ```
-```text
-total 544M
--rw-r--r-- 1 root 48M Jan 6 08:13 cni-plugins-linux-arm64-v1.6.2.tgz
--rw-r--r-- 1 root 34M Mar 17 19:33 containerd-2.1.0-beta.0-linux-arm64.tar.gz
--rw-r--r-- 1 root 17M Dec 9 01:16 crictl-v1.32.0-linux-arm64.tar.gz
--rw-r--r-- 1 root 21M Mar 27 16:15 etcd-v3.6.0-rc.3-linux-arm64.tar.gz
--rw-r--r-- 1 root 87M Mar 11 20:31 kube-apiserver
--rw-r--r-- 1 root 80M Mar 11 20:31 kube-controller-manager
--rw-r--r-- 1 root 54M Mar 11 20:31 kubectl
--rw-r--r-- 1 root 72M Mar 11 20:31 kubelet
--rw-r--r-- 1 root 63M Mar 11 20:31 kube-proxy
--rw-r--r-- 1 root 62M Mar 11 20:31 kube-scheduler
--rw-r--r-- 1 root 11M Mar 4 04:14 runc.arm64
+Extract the component binaries from the release archives and organize them under the `downloads` directory.
+```bash
+{
+  ARCH=$(dpkg --print-architecture)
+  mkdir -p downloads/{client,cni-plugins,controller,worker}
+  tar -xvf downloads/crictl-v1.32.0-linux-${ARCH}.tar.gz \
+    -C downloads/worker/
+  tar -xvf downloads/containerd-2.1.0-beta.0-linux-${ARCH}.tar.gz \
+    --strip-components 1 \
+    -C downloads/worker/
+  tar -xvf downloads/cni-plugins-linux-${ARCH}-v1.6.2.tgz \
+    -C downloads/cni-plugins/
+  tar -xvf downloads/etcd-v3.6.0-rc.3-linux-${ARCH}.tar.gz \
+    -C downloads/ \
+    --strip-components 1 \
+    etcd-v3.6.0-rc.3-linux-${ARCH}/etcdctl \
+    etcd-v3.6.0-rc.3-linux-${ARCH}/etcd
+  mv downloads/{etcdctl,kubectl} downloads/client/
+  mv downloads/{etcd,kube-apiserver,kube-controller-manager,kube-scheduler} \
+    downloads/controller/
+  mv downloads/{kubelet,kube-proxy} downloads/worker/
+  mv downloads/runc.${ARCH} downloads/worker/runc
+}
+```
+```bash
+rm -rf downloads/*gz
+```
+Make the binaries executable.
+```bash
+{
+  chmod +x downloads/{client,cni-plugins,controller,worker}/*
+}
 ```
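After these steps the `downloads` directory is organized by role, matching the destinations used in the `mv` commands above; a sketch of the resulting layout:

```text
downloads/
├── client/        # etcdctl, kubectl
├── cni-plugins/   # CNI plugin binaries
├── controller/    # etcd, kube-apiserver, kube-controller-manager, kube-scheduler
└── worker/        # containerd and its shims, crictl, kubelet, kube-proxy, runc
```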
### Install kubectl
@@ -97,8 +120,7 @@ Use the `chmod` command to make the `kubectl` binary executable and move it to t
 ```bash
 {
-  chmod +x downloads/kubectl
-  cp downloads/kubectl /usr/local/bin/
+  cp downloads/client/kubectl /usr/local/bin/
 }
 ```
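With `kubectl` copied into `/usr/local/bin/`, a quick client-only version check confirms the binary runs on the jumpbox's architecture; a minimal verification step (the flag is standard `kubectl`, and the exact output depends on the release you downloaded):

```bash
kubectl version --client
```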

View File

@@ -85,14 +85,14 @@ Once each key is added, verify SSH public key access is working:
 ```bash
 while read IP FQDN HOST SUBNET; do
-  ssh -n root@${IP} uname -o -m
+  ssh -n root@${IP} hostname
 done < machines.txt
 ```
 ```text
-aarch64 GNU/Linux
-aarch64 GNU/Linux
-aarch64 GNU/Linux
+server
+node-0
+node-1
 ```
 ## Hostnames
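A note on the verification loop above: it assumes the `machines.txt` inventory on the jumpbox, with one machine per line in `IP FQDN HOST SUBNET` order (the four fields bound by `read`). A purely illustrative example with placeholder values; your addresses and pod subnets will differ:

```text
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0 10.200.0.0/24
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1 10.200.1.0/24
```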
@@ -195,14 +195,14 @@ At this point you should be able to SSH to each machine listed in the `machines.
 ```bash
 for host in server node-0 node-1
-  do ssh root@${host} uname -o -m -n
+  do ssh root@${host} hostname
 done
 ```
 ```text
-server.kubernetes.local aarch64 GNU/Linux
-node-0.kubernetes.local aarch64 GNU/Linux
-node-1.kubernetes.local aarch64 GNU/Linux
+server
+node-0
+node-1
 ```
 ## Adding `/etc/hosts` Entries To The Remote Machines

View File

@@ -8,7 +8,8 @@ Copy `etcd` binaries and systemd unit files to the `server` machine:
 ```bash
 scp \
-  downloads/etcd-v3.6.0-rc.3-linux-arm64.tar.gz \
+  downloads/controller/etcd \
+  downloads/client/etcdctl \
   units/etcd.service \
   root@server:~/
 ```
@@ -27,8 +28,7 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
 ```bash
 {
-  tar -xvf etcd-v3.6.0-rc.3-linux-arm64.tar.gz
-  mv etcd-v3.6.0-rc.3-linux-arm64/etcd* /usr/local/bin/
+  mv etcd etcdctl /usr/local/bin/
 }
 ```
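Because the binaries now arrive pre-extracted from the architecture-specific download, a quick check on the `server` machine that they execute is a reasonable extra step; a minimal sketch using the standard version flags, run after the `mv` above:

```bash
etcd --version
etcdctl version
```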

View File

@@ -8,10 +8,10 @@ Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files to
 ```bash
 scp \
-  downloads/kube-apiserver \
-  downloads/kube-controller-manager \
-  downloads/kube-scheduler \
-  downloads/kubectl \
+  downloads/controller/kube-apiserver \
+  downloads/controller/kube-controller-manager \
+  downloads/controller/kube-scheduler \
+  downloads/client/kubectl \
   units/kube-apiserver.service \
   units/kube-controller-manager.service \
   units/kube-scheduler.service \
@@ -40,10 +40,6 @@ Install the Kubernetes binaries:
 ```bash
 {
-  chmod +x kube-apiserver \
-    kube-controller-manager \
-    kube-scheduler kubectl
   mv kube-apiserver \
     kube-controller-manager \
     kube-scheduler kubectl \

View File

@@ -9,8 +9,8 @@ The commands in this section must be run from the `jumpbox`.
 Copy the Kubernetes binaries and systemd unit files to each worker instance:
 ```bash
-for host in node-0 node-1; do
-  SUBNET=$(grep $host machines.txt | cut -d " " -f 4)
+for HOST in node-0 node-1; do
+  SUBNET=$(grep ${HOST} machines.txt | cut -d " " -f 4)
   sed "s|SUBNET|$SUBNET|g" \
     configs/10-bridge.conf > 10-bridge.conf
@@ -18,27 +18,30 @@ for host in node-0 node-1; do
   configs/kubelet-config.yaml > kubelet-config.yaml
   scp 10-bridge.conf kubelet-config.yaml \
-    root@$host:~/
+    root@${HOST}:~/
 done
 ```
 ```bash
-for host in node-0 node-1; do
+for HOST in node-0 node-1; do
   scp \
-    downloads/runc.arm64 \
-    downloads/crictl-v1.32.0-linux-arm64.tar.gz \
-    downloads/cni-plugins-linux-arm64-v1.6.2.tgz \
-    downloads/containerd-2.1.0-beta.0-linux-arm64.tar.gz \
-    downloads/kubectl \
-    downloads/kubelet \
-    downloads/kube-proxy \
+    downloads/worker/* \
+    downloads/client/kubectl \
     configs/99-loopback.conf \
     configs/containerd-config.toml \
     configs/kube-proxy-config.yaml \
     units/containerd.service \
     units/kubelet.service \
     units/kube-proxy.service \
-    root@$host:~/
+    root@${HOST}:~/
+done
+```
+```bash
+for HOST in node-0 node-1; do
+  scp \
+    downloads/cni-plugins/* \
+    root@${HOST}:~/cni-plugins/
 done
 ```
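The last loop copies into `~/cni-plugins/` on each node, which assumes that directory already exists there. If it does not, a small preparatory loop along these lines (an editorial sketch, not shown in this diff) creates it first:

```bash
for HOST in node-0 node-1; do
  ssh root@${HOST} mkdir -p ~/cni-plugins
done
```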
@@ -95,14 +98,10 @@ Install the worker binaries:
 ```bash
 {
-  mkdir -p containerd
-  tar -xvf crictl-v1.32.0-linux-arm64.tar.gz
-  tar -xvf containerd-2.1.0-beta.0-linux-arm64.tar.gz -C containerd
-  tar -xvf cni-plugins-linux-arm64-v1.6.2.tgz -C /opt/cni/bin/
-  mv runc.arm64 runc
-  chmod +x crictl kubectl kube-proxy kubelet runc
-  mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
-  mv containerd/bin/* /bin/
+  mv crictl kube-proxy kubelet runc \
+    /usr/local/bin/
+  mv containerd containerd-shim-runc-v2 containerd-stress /bin/
+  mv cni-plugins/* /opt/cni/bin/
 }
 ```
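Once the worker binaries are in place, each one can report its version, which doubles as a check that the binaries match the node's architecture; a minimal sketch to run on `node-0` and `node-1` (all standard version flags):

```bash
runc --version
crictl --version
containerd --version
kubelet --version
kube-proxy --version
```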