chg: Hostnames In Documentation Continued

Updated commands that require sudo when running as the vagrant user.
pull/882/head
Khalifah Shabazz 2025-06-03 11:40:47 -04:00
parent fe76f494fb
commit c494223545
GPG Key ID: 762A588BFB5A40ED
4 changed files with 96 additions and 76 deletions

View File

@@ -16,11 +16,8 @@ for HOST in node01 node02; do
   sed "s|SUBNET|$SUBNET|g" \
     configs/10-bridge.conf > 10-bridge.conf
-  sed "s|SUBNET|$SUBNET|g" \
-    configs/kubelet-config.yaml > kubelet-config.yaml
-  scp 10-bridge.conf kubelet-config.yaml \
-    root@${HOST}:~/
+  scp 10-bridge.conf configs/kubelet-config.yaml \
+    vagrant@${HOST}:~/
 done
```
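The templating step above hinges on `sed` replacing every literal `SUBNET` token in the config template before the file is copied out. A standalone sketch of that substitution, using a made-up template and subnet value rather than the repo's `configs/10-bridge.conf`:

```shell
# Hypothetical stand-in for configs/10-bridge.conf, for illustration only.
SUBNET="10.200.0.0/24"
cat > /tmp/10-bridge.conf.tmpl <<'EOF'
{
  "name": "bridge",
  "ipam": { "ranges": [[{ "subnet": "SUBNET" }]] }
}
EOF

# Same substitution as the loop above: every literal SUBNET becomes the value.
sed "s|SUBNET|$SUBNET|g" /tmp/10-bridge.conf.tmpl > /tmp/10-bridge.conf
cat /tmp/10-bridge.conf
```

Note the `|` delimiter in the `s` command: it avoids escaping the `/` characters in the CIDR value.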
@@ -35,7 +32,8 @@ for HOST in node01 node02; do
     units/containerd.service \
     units/kubelet.service \
     units/kube-proxy.service \
-    root@${HOST}:~/
+    downloads/cni-plugins/ \
+    vagrant@${HOST}:~/
 done
```
@@ -43,7 +41,7 @@ done
 for HOST in node01 node02; do
   scp -r \
     downloads/cni-plugins/ \
-    root@${HOST}:~/cni-plugins/
+    vagrant@${HOST}:~/cni-plugins/
 done
```
@@ -51,7 +49,7 @@ The commands in the next section must be run on each worker instance: `node01`,
 `node02`. Login to the worker instance using the `ssh` command. Example:
 ```bash
-ssh root@node01
+ssh vagrant@node01
 ```
## Provisioning a Kubernetes Worker Node
@@ -60,8 +58,8 @@ Install the OS dependencies:
 ```bash
 {
-  apt-get update
-  apt-get -y install socat conntrack ipset kmod
+  sudo apt-get update
+  sudo apt-get -y install socat conntrack ipset kmod
 }
 ```
@@ -92,7 +90,7 @@ swapoff -a
 Create the installation directories:
 ```bash
-mkdir -p \
+sudo mkdir -p \
   /etc/cni/net.d \
   /opt/cni/bin \
   /var/lib/kubelet \
@@ -105,10 +103,10 @@ Install the worker binaries:
 ```bash
 {
-  mv crictl kube-proxy kubelet /usr/local/bin/
-  mv runc /usr/local/sbin/
-  mv containerd ctr containerd-shim-runc-v2 containerd-stress /bin/
-  mv cni-plugins/* /opt/cni/bin/
+  sudo mv crictl kube-proxy kubelet kubectl /usr/local/bin/
+  sudo mv runc /usr/local/sbin/
+  sudo mv containerd ctr containerd-shim-runc-v2 containerd-stress /bin/
+  sudo mv cni-plugins/* /opt/cni/bin/
 }
 ```
@@ -117,7 +115,7 @@ Install the worker binaries:
 Create the `bridge` network configuration file:
 ```bash
-mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
+sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
 ```
To ensure network traffic crossing the CNI `bridge` network is processed by
@@ -125,18 +123,16 @@ To ensure network traffic crossing the CNI `bridge` network is processed by
 ```bash
 {
-  modprobe br-netfilter
-  echo "br-netfilter" >> /etc/modules-load.d/modules.conf
+  sudo modprobe br-netfilter
+  echo "br-netfilter" | sudo tee -a /etc/modules-load.d/modules.conf
 }
 ```
 ```bash
 {
-  echo "net.bridge.bridge-nf-call-iptables = 1" \
-    >> /etc/sysctl.d/kubernetes.conf
-  echo "net.bridge.bridge-nf-call-ip6tables = 1" \
-    >> /etc/sysctl.d/kubernetes.conf
-  sysctl -p /etc/sysctl.d/kubernetes.conf
+  echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee -a /etc/sysctl.d/kubernetes.conf
+  echo "net.bridge.bridge-nf-call-ip6tables = 1" | sudo tee -a /etc/sysctl.d/kubernetes.conf
+  sudo sysctl -p /etc/sysctl.d/kubernetes.conf
 }
 ```
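The switch from `>>` to `| sudo tee -a` in this hunk matters because output redirection is performed by the calling (unprivileged) shell, not by `sudo`; piping into `sudo tee -a` lets the privileged process do the append instead. A sketch of the pattern on a throwaway file, with no `sudo` required to demonstrate the append behavior:

```shell
# Demonstrate the append pattern on a scratch file; in the lab the target is
# the root-owned /etc/sysctl.d/kubernetes.conf and tee runs under sudo.
conf=/tmp/kubernetes.conf.demo
rm -f "$conf"
echo "net.bridge.bridge-nf-call-iptables = 1"  | tee -a "$conf"
echo "net.bridge.bridge-nf-call-ip6tables = 1" | tee -a "$conf"
cat "$conf"
```

Like `>>`, `tee -a` appends rather than truncates, so re-running the block adds duplicate lines; that is also true of the commands in the hunk above.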
@@ -146,9 +142,9 @@ Install the `containerd` configuration files:
 ```bash
 {
-  mkdir -p /etc/containerd/
-  mv containerd-config.toml /etc/containerd/config.toml
-  mv containerd.service /etc/systemd/system/
+  sudo mkdir -p /etc/containerd/
+  sudo mv containerd-config.toml /etc/containerd/config.toml
+  sudo mv containerd.service /etc/systemd/system/
 }
 ```
@@ -158,8 +154,8 @@ Create the `kubelet-config.yaml` configuration file:
 ```bash
 {
-  mv kubelet-config.yaml /var/lib/kubelet/
-  mv kubelet.service /etc/systemd/system/
+  sudo mv kubelet-config.yaml /var/lib/kubelet/
+  sudo mv kubelet.service /etc/systemd/system/
 }
 ```
@@ -167,8 +163,8 @@ Create the `kubelet-config.yaml` configuration file:
 ```bash
 {
-  mv kube-proxy-config.yaml /var/lib/kube-proxy/
-  mv kube-proxy.service /etc/systemd/system/
+  sudo mv kube-proxy-config.yaml /var/lib/kube-proxy/
+  sudo mv kube-proxy.service /etc/systemd/system/
 }
 ```
@@ -176,23 +172,38 @@ Create the `kubelet-config.yaml` configuration file:
 ```bash
 {
-  systemctl daemon-reload
-  systemctl enable containerd kubelet kube-proxy
-  systemctl start containerd kubelet kube-proxy
+  sudo systemctl daemon-reload
+  sudo systemctl enable containerd kubelet kube-proxy
+  sudo systemctl start containerd kubelet kube-proxy
 }
 ```
 Check if the kubelet service is running:
 ```bash
-systemctl is-active kubelet
+sudo systemctl status kubelet
 ```
 ```text
-active
+● kubelet.service - Kubernetes Kubelet
+     Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
+     Active: active (running) since Tue 2025-06-03 15:36:24 UTC; 28s ago
+       Docs: https://github.com/kubernetes/kubernetes
+   Main PID: 5645 (kubelet)
+      Tasks: 10 (limit: 1102)
+     Memory: 27.8M
+        CPU: 501ms
+     CGroup: /system.slice/kubelet.service
+             └─5645 /usr/local/bin/kubelet --config=/var/lib/kubelet/kubelet-config.yaml --kubeconfig=/var/lib/kubelet/kubeconfig --v=2
+Jun 03 15:36:24 node02 kubelet[5645]: I0603 15:36:24.878735 5645 kubelet_node_status.go:687] "Recording event message for node" node="node02" event="NodeHasNoDiskPressure"
+Jun 03 15:36:24 node02 kubelet[5645]: I0603 15:36:24.878809 5645 kubelet_node_status.go:687] "Recording event message for node" node="node02" event="NodeHasSufficientPID"
+Jun 03 15:36:24 node02 kubelet[5645]: I0603 15:36:24.878879 5645 kubelet_node_status.go:75] "Attempting to register node" node="node02"
+Jun 03 15:36:24 node02 kubelet[5645]: I0603 15:36:24.886841 5645 kubelet_node_status.go:78] "Successfully registered node" node="node02"
 ```
-Be sure to complete the steps in this section on each worker node, `node01` and `node02`, before moving on to the next section.
+Be sure to complete the steps in this section on each worker node, `node01`
+and `node02`, before moving on to the next section.
## Verification
@@ -201,15 +212,15 @@ Run the following commands from the `jumpbox` machine.
 List the registered Kubernetes nodes:
 ```bash
-ssh root@controlplane \
+ssh vagrant@controlplane \
   "kubectl get nodes \
     --kubeconfig admin.kubeconfig"
 ```
 ```
-NAME     STATUS   ROLES    AGE   VERSION
-node01   Ready    <none>   1m    v1.33.1
-node02   Ready    <none>   10s   v1.33.1
+NAME     STATUS   ROLES    AGE     VERSION
+node01   Ready    <none>   2m5s    v1.33.1
+node02   Ready    <none>   2m12s   v1.33.1
 ```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

View File

@@ -1,6 +1,7 @@
 # Configuring kubectl for Remote Access
-In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.
+In this lab you will generate a kubeconfig file for the `kubectl` command line
+utility based on the `admin` user credentials.
> Run the commands in this lab from the `jumpbox` machine.
@@ -8,7 +9,8 @@ In this lab you will generate a kubeconfig file for the `kubectl` command line utility
 Each kubeconfig requires a Kubernetes API Server to connect to.
-You should be able to ping `controlplane.kubernetes.local` based on the `/etc/hosts` DNS entry from a previous lab.
+You should be able to ping `controlplane.kubernetes.local` based on the
+`/etc/hosts` DNS entry from a previous lab.
```bash
curl --cacert ca.crt \
@@ -49,7 +51,9 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
   kubectl config use-context kubernetes-the-hard-way
 }
 ```
-The results of running the command above should create a kubeconfig file in the default location `~/.kube/config` used by the `kubectl` commandline tool. This also means you can run the `kubectl` command without specifying a config.
+The results of running the command above should create a kubeconfig file in
+the default location `~/.kube/config` used by the `kubectl` commandline tool.
+This also means you can run the `kubectl` command without specifying a config.
## Verification
@@ -62,7 +66,7 @@ kubectl version
 ```text
 Client Version: v1.33.1
-Kustomize Version: v5.5.0
+Kustomize Version: v5.6.0
 Server Version: v1.33.1
 ```
@@ -73,9 +77,9 @@ kubectl get nodes
 ```
 ```
-NAME     STATUS   ROLES    AGE   VERSION
-node01   Ready    <none>   10m   v1.33.1
-node02   Ready    <none>   10m   v1.33.1
+NAME     STATUS   ROLES    AGE   VERSION
+node01   Ready    <none>   15m   v1.33.1
+node02   Ready    <none>   15m   v1.33.1
 ```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

View File

@@ -26,28 +26,28 @@ Print the internal IP address and Pod CIDR range for each worker instance:
 ```
 ```bash
-ssh root@controlplane <<EOF
-  ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
-  ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
+ssh vagrant@controlplane <<EOF
+  sudo ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
+  sudo ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
 EOF
 ```
 ```bash
-ssh root@node01 <<EOF
-  ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
+ssh vagrant@node01 <<EOF
+  sudo ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
 EOF
 ```
 ```bash
-ssh root@node02 <<EOF
-  ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
+ssh vagrant@node02 <<EOF
+  sudo ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
 EOF
 ```
 ## Verification
 ```bash
-ssh root@controlplane ip route
+ssh vagrant@controlplane ip route
 ```
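The verification output should list one static route per pod subnet. A scripted version of the same check, run here against a canned route table with made-up addresses so it works without a cluster:

```shell
# Canned 'ip route' output (hypothetical addresses) standing in for the
# real table printed by the verification commands above.
routes='default via 192.168.56.1 dev eth0
10.200.0.0/24 via 192.168.56.20 dev eth1
10.200.1.0/24 via 192.168.56.21 dev eth1'

# Confirm a route exists for each pod CIDR.
for subnet in 10.200.0.0/24 10.200.1.0/24; do
  echo "$routes" | grep -q "^$subnet " && echo "route for $subnet: ok"
done
```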
```text
@@ -58,7 +58,7 @@ XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
 ```
 ```bash
-ssh root@node01 ip route
+ssh vagrant@node01 ip route
 ```
```text
@@ -68,7 +68,7 @@ XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
 ```
 ```bash
-ssh root@node02 ip route
+ssh vagrant@node02 ip route
 ```
```text

View File

@@ -1,5 +1,10 @@
 # Smoke Test
+In this lab you will complete a series of tasks to ensure your Kubernetes
+cluster is functioning correctly. These commands should be run from the
+`jumpbox`.
 ## Add kubectl Alias
 So you can just type `k` in place of `kubectl` for running Kubernetes commands.
@@ -11,9 +16,6 @@ get an error that it is an unknown command. Then run:
 echo "alias k='kubectl'" | tee -a ~/.bashrc && source ~/.bashrc
 ```
-In this lab you will complete a series of tasks to ensure your Kubernetes
-cluster is functioning correctly.
## Data Encryption
In this section you will verify the ability to [encrypt secret data at rest].
@@ -28,7 +30,7 @@ kubectl create secret generic kubernetes-the-hard-way \
 Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
 ```bash
-ssh root@controlplane \
+ssh vagrant@controlplane \
   'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
 ```
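In `hexdump -C` output, each line shows an offset, sixteen hex bytes, and an ASCII gutter; the point of this check is that the secret's plaintext should not be readable in the gutter, only the encryption prefix followed by ciphertext. A local illustration, with `printf` standing in for the value stored in etcd:

```shell
# k8s:enc:aescbc:v1:key1: is the prefix an aescbc-encrypted value starts
# with in this tutorial's configuration; everything after it should be
# unreadable ciphertext in the real dump.
printf 'k8s:enc:aescbc:v1:key1:' | hexdump -C
```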
@@ -69,14 +71,14 @@ In this section you will verify the ability to create and manage [Deployments].
 Create a deployment for the [nginx] web server:
 ```bash
-kubectl create deployment nginx \
+k create deployment nginx \
   --image=nginx:latest
 ```
 List the pod created by the `nginx` deployment:
 ```bash
-kubectl get pods -l app=nginx
+k get pods -l app=nginx
 ```
```bash
@@ -86,7 +88,8 @@ nginx-56fcf95486-c8dnx 1/1 Running 0 8s
 ### Port Forwarding
-In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
+In this section you will verify the ability to access applications remotely
+using [port forwarding].
Retrieve the full name of the `nginx` pod:
@@ -98,7 +101,7 @@ POD_NAME=$(kubectl get pods -l app=nginx \
 Forward port `8080` on your local machine to port `80` of the `nginx` pod:
 ```bash
-kubectl port-forward $POD_NAME 8080:80
+k port-forward $POD_NAME 8080:80
 ```
```text
@@ -114,13 +117,13 @@ curl --head http://127.0.0.1:8080
 ```text
 HTTP/1.1 200 OK
-Server: nginx/1.27.4
-Date: Sun, 06 Apr 2025 17:17:12 GMT
+Server: nginx/1.27.5
+Date: Tue, 03 Jun 2025 16:02:14 GMT
 Content-Type: text/html
 Content-Length: 615
-Last-Modified: Wed, 05 Feb 2025 11:06:32 GMT
+Last-Modified: Wed, 16 Apr 2025 12:01:11 GMT
 Connection: keep-alive
-ETag: "67a34638-267"
+ETag: "67ff9c07-267"
 Accept-Ranges: bytes
 ```
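To turn the manual header check above into a scripted assertion, the `Server` header can be extracted from the response. Shown here against a saved sample (hypothetical file path and values) rather than a live `curl`, so it runs anywhere:

```shell
# Saved response headers; in the lab these come from
# curl --head http://127.0.0.1:8080
cat > /tmp/headers.txt <<'EOF'
HTTP/1.1 200 OK
Server: nginx/1.27.5
Content-Type: text/html
EOF

# Split each header on ': ' and print the value of the Server header.
awk -F': ' '/^Server:/ {print $2}' /tmp/headers.txt
```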
@@ -140,7 +143,7 @@ In this section you will verify the ability to [retrieve container logs].
 Print the `nginx` pod logs:
 ```bash
-kubectl logs $POD_NAME
+k logs $POD_NAME
 ```
```text
@@ -157,7 +160,7 @@ Print the nginx version by executing the `nginx -v` command in the `nginx`
 container:
 ```bash
-kubectl exec -ti $POD_NAME -- nginx -v
+k exec -ti $POD_NAME -- nginx -v
 ```
```text
@@ -172,7 +175,7 @@ In this section you will verify the ability to expose applications using a
 Expose the `nginx` deployment using a [NodePort] service:
 ```bash
-kubectl expose deployment nginx \
+k expose deployment nginx \
   --port 80 --type NodePort
 ```
@@ -202,13 +205,14 @@ curl -I http://${NODE_NAME}:${NODE_PORT}
 ```
 ```text
-Server: nginx/1.27.4
-Date: Sun, 06 Apr 2025 17:18:36 GMT
+HTTP/1.1 200 OK
+Server: nginx/1.27.5
+Date: Tue, 03 Jun 2025 16:06:33 GMT
 Content-Type: text/html
 Content-Length: 615
-Last-Modified: Wed, 05 Feb 2025 11:06:32 GMT
+Last-Modified: Wed, 16 Apr 2025 12:01:11 GMT
 Connection: keep-alive
-ETag: "67a34638-267"
+ETag: "67ff9c07-267"
 Accept-Ranges: bytes
 ```
@@ -224,3 +228,4 @@ Next: [Cleaning Up](13-cleanup.md)
 [Service]: https://kubernetes.io/docs/concepts/services-networking/service/
 [NodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
 [cloud provider integration]: https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider
+[port forwarding]: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/