pull/865/merge
Mahyar Mirrashed 2025-04-18 16:29:27 -07:00 committed by GitHub
commit c1b56293eb
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
5 changed files with 29 additions and 96 deletions


@@ -6,6 +6,8 @@ In this lab you will review the machine requirements necessary to follow this tu
This tutorial requires four (4) virtual or physical ARM64 or AMD64 machines running Debian 12 (bookworm). The following table lists the four machines and their CPU, memory, and storage requirements.
The "jumpbox" is the machine from which we will administer and configure the Kubernetes cluster.
| Name | Description | CPU | RAM | Storage |
|---------|------------------------|-----|-------|---------|
| jumpbox | Administration host | 1 | 512MB | 10GB |
@@ -13,7 +15,12 @@ This tutorial requires four (4) virtual or physical ARM64 or AMD64 machines runn
| node-0 | Kubernetes worker node | 1 | 2GB | 20GB |
| node-1 | Kubernetes worker node | 1 | 2GB | 20GB |
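The table can be spot-checked directly on each machine. A minimal sketch using standard Linux tools (the exact commands and output format are an assumption, not part of the lab):

```shell
# Spot-check one machine against the requirements table.
cpus=$(nproc)                                                    # CPU count
mem_mb=$(awk '/^MemTotal/{printf "%d", $2/1024}' /proc/meminfo)  # RAM in MB
disk=$(df -h --output=size / | tail -1 | tr -d ' ')              # root filesystem size
echo "cpus=${cpus} mem_mb=${mem_mb} disk=${disk}"
```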
How you provision the machines is up to you; the only requirement is that each machine meets the system requirements above, including the machine specs and OS version.
> [!NOTE]
> You should configure these VMs in headless (no GUI/desktop) mode. Our labs will be performed entirely on the command line.
Once you have all four machines provisioned, verify the OS requirements by viewing the `/etc/os-release` file:
```bash
cat /etc/os-release
```
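What to look for in that file is the `VERSION_CODENAME` field. A sketch that parses a sample snippet (illustrative data, not read from the live system):

```shell
# Sample os-release content; on a real machine, read /etc/os-release instead.
sample='PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
VERSION_CODENAME=bookworm'
codename=$(printf '%s\n' "$sample" | awk -F= '/^VERSION_CODENAME=/{print $2}')
echo "codename=${codename}"   # a correctly provisioned machine reports: bookworm
```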


@@ -214,8 +214,7 @@ Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:
```bash
while read IP FQDN HOST SUBNET; do
scp hosts root@${HOST}:~/
ssh -n root@${HOST} "cat hosts >> /etc/hosts"
done < machines.txt
```


@@ -45,106 +45,39 @@ node-0.kubeconfig
node-1.kubeconfig
```
### The Kubernetes Service Configuration Files
Generate a kubeconfig file for each of the `kube-proxy`, `kube-controller-manager`, and `kube-scheduler` services:
```bash
for service in proxy controller-manager scheduler; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.kubernetes.local:6443 \
--kubeconfig=kube-${service}.kubeconfig
kubectl config set-credentials system:kube-${service} \
--client-certificate=kube-${service}.crt \
--client-key=kube-${service}.key \
--embed-certs=true \
--kubeconfig=kube-${service}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-${service} \
--kubeconfig=kube-${service}.kubeconfig
kubectl config use-context default \
--kubeconfig=kube-${service}.kubeconfig
done
```
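Before moving on, you can sanity-check that the loop wrote all three files. A minimal sketch (run it in the directory where the loop ran; the `present`/`MISSING` labels are illustrative, not from the lab):

```shell
# Report which of the expected kubeconfig files exist in the current directory.
report=$(for service in proxy controller-manager scheduler; do
  if [ -f "kube-${service}.kubeconfig" ]; then
    echo "kube-${service}.kubeconfig: present"
  else
    echo "kube-${service}.kubeconfig: MISSING"
  fi
done)
echo "$report"
```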
Results:
```text
kube-proxy.kubeconfig
kube-controller-manager.kubeconfig
kube-scheduler.kubeconfig
```
@@ -191,7 +124,7 @@ for host in node-0 node-1; do
ssh root@${host} "mkdir -p /var/lib/{kube-proxy,kubelet}"
scp kube-proxy.kubeconfig \
root@${host}:/var/lib/kube-proxy/kubeconfig
scp ${host}.kubeconfig \
root@${host}:/var/lib/kubelet/kubeconfig


@@ -23,22 +23,16 @@ Print the internal IP address and Pod CIDR range for each worker instance:
```
```bash
ssh root@server "ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}"
ssh root@server "ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}"
```
```bash
ssh root@node-0 "ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}"
```
```bash
ssh root@node-1 "ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}"
```
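The four route commands above follow one pattern: each host gets a route to every pod subnet it does not own, via that subnet's node IP. A dry-run sketch that prints the commands instead of executing them (the addresses and subnets are placeholders, not the lab's real values):

```shell
# Dry run: print each ssh/ip-route command instead of executing it over SSH.
NODE_0_IP=192.168.1.120;  NODE_0_SUBNET=10.200.0.0/24   # placeholder values
NODE_1_IP=192.168.1.121;  NODE_1_SUBNET=10.200.1.0/24   # placeholder values
routes=$(
  for spec in "server ${NODE_0_SUBNET} ${NODE_0_IP}" \
              "server ${NODE_1_SUBNET} ${NODE_1_IP}" \
              "node-0 ${NODE_1_SUBNET} ${NODE_1_IP}" \
              "node-1 ${NODE_0_SUBNET} ${NODE_0_IP}"; do
    set -- $spec   # split into: host, subnet, gateway
    echo "ssh root@$1 \"ip route add $2 via $3\""
  done
)
echo "$routes"
```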
## Verification


@@ -56,7 +56,7 @@ Create a deployment for the [nginx](https://nginx.org/en/) web server:
```bash
kubectl create deployment nginx \
--image=nginx:1.27.4
```
List the pod created by the `nginx` deployment:
@@ -72,7 +72,7 @@ nginx-56fcf95486-c8dnx 1/1 Running 0 8s
### Port Forwarding
In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/). If you are familiar with `tmux`, start a tmux session for this part (install it with `apt-get install -y tmux`).
Retrieve the full name of the `nginx` pod:
@@ -92,7 +92,7 @@ Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```
In a new terminal or tmux window, make an HTTP request using the forwarding address:
```bash
curl --head http://127.0.0.1:8080