From 398f5a73c0909952b5748804f0be6f7904b8b017 Mon Sep 17 00:00:00 2001
From: Khalifah Shabazz <619281+b01@users.noreply.github.com>
Date: Sun, 1 Jun 2025 23:37:55 -0400
Subject: [PATCH] chg: Hostnames In Documentation Continued

Updated more references to old hostnames in the documentation to reflect
the new naming convention.
---
 docs/02-jumpbox.md                            | 17 +++++--
 docs/04-certificate-authority.md              |  6 +--
 docs/05-kubernetes-configuration-files.md     | 21 ++++++---
 docs/06-data-encryption-keys.md               | 17 +++++--
 docs/07-bootstrapping-etcd.md                 | 21 ++++++---
 ...08-bootstrapping-kubernetes-controllers.md | 18 ++++----
 docs/09-bootstrapping-kubernetes-workers.md   | 40 ++++++++++++-----
 docs/11-pod-network-routes.md                 | 23 ++++++----
 docs/12-smoke-test.md                         | 45 +++++++++++++------
 docs/13-cleanup.md                            |  9 +++-
 docs/resources.md                             |  7 ++-
 units/etcd.service                            |  2 +-
 units/kube-apiserver.service                  |  8 ++--
 13 files changed, 160 insertions(+), 74 deletions(-)

diff --git a/docs/02-jumpbox.md b/docs/02-jumpbox.md
index 09d3da6..a82de01 100644
--- a/docs/02-jumpbox.md
+++ b/docs/02-jumpbox.md
@@ -1,8 +1,17 @@
 # Set Up The Jumpbox

-In this lab you will set up one of the four machines to be a `jumpbox`. This machine will be used to run commands throughout this tutorial. While a dedicated machine is being used to ensure consistency, these commands can also be run from just about any machine including your personal workstation running macOS or Linux.
+In this lab you will set up one of the four machines to be a `jumpbox`. This
+machine will be used to run commands throughout this tutorial. While a dedicated
+machine is being used to ensure consistency, these commands can also be run from
+just about any machine including your personal workstation running macOS or
+Linux.

-Think of the `jumpbox` as the administration machine that you will use as a home base when setting up your Kubernetes cluster from the ground up. Before we get started we need to install a few command line utilities and clone the Kubernetes The Hard Way git repository, which contains some additional configuration files that will be used to configure various Kubernetes components throughout this tutorial.
+Think of the `jumpbox` as the administration machine that you will use as a
+home base when setting up your Kubernetes cluster from the ground up. Before
+we get started we need to install a few command line utilities and clone the
+Kubernetes The Hard Way git repository, which contains some additional
+configuration files that will be used to configure various Kubernetes
+components throughout this tutorial.

 Log in to the `jumpbox`:

@@ -10,7 +19,9 @@ Log in to the `jumpbox`:
 ssh root@jumpbox
 ```

-All commands will be run as the `root` user. This is being done for the sake of convenience, and will help reduce the number of commands required to set everything up.
+All commands will be run as the `root` user. This is being done for the sake
+of convenience, and will help reduce the number of commands required to set
+everything up.

 ### Install Command Line Utilities

diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md
index dbc5977..6fc193e 100644
--- a/docs/04-certificate-authority.md
+++ b/docs/04-certificate-authority.md
@@ -90,7 +90,7 @@ Copy the appropriate certificates and private keys to the `node01` and `node02`

 ```bash
 for host in node01 node02; do
-  ssh root@${host} mkdir /var/lib/kubelet/
+  ssh root@${host} mkdir -p /var/lib/kubelet/

   scp ca.crt root@${host}:/var/lib/kubelet/

@@ -107,9 +107,9 @@ Copy the appropriate certificates and private keys to the `controlplane` machine
 ```bash
 scp \
   ca.key ca.crt \
-  kube-api-controlplane.key kube-api-controlplane.crt \
+  kube-apiserver.key kube-apiserver.crt \
   service-accounts.key service-accounts.crt \
-  root@server:~/
+  root@controlplane:~/
 ```

 > The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
diff --git a/docs/05-kubernetes-configuration-files.md b/docs/05-kubernetes-configuration-files.md
index 844059b..0b431bd 100644
--- a/docs/05-kubernetes-configuration-files.md
+++ b/docs/05-kubernetes-configuration-files.md
@@ -1,16 +1,23 @@
 # Generating Kubernetes Configuration Files for Authentication

-In this lab you will generate [Kubernetes client configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), typically called kubeconfigs, which configure Kubernetes clients to connect and authenticate to Kubernetes API Servers.
+In this lab you will generate [Kubernetes client configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/),
+typically called kubeconfigs, which configure Kubernetes clients to connect
+and authenticate to Kubernetes API Servers.

 ## Client Authentication Configs

-In this section you will generate kubeconfig files for the `kubelet` and the `admin` user.
+In this section you will generate kubeconfig files for the `kubelet` and the
+`admin` user.

 ### The kubelet Kubernetes Configuration File

-When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/reference/access-authn-authz/node/).
+When generating kubeconfig files for Kubelets the client certificate matching
+the Kubelet's node name must be used. This will ensure Kubelets are properly
+authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/reference/access-authn-authz/node/).

-> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
+> The following commands must be run in the same directory used to generate
+> the SSL certificates during the
+> [Generating TLS Certificates](04-certificate-authority.md) lab.

 Generate a kubeconfig file for the `node01` and `node02` worker nodes:

@@ -191,20 +198,20 @@ for host in node01 node02; do
   ssh root@${host} "mkdir -p /var/lib/{kube-proxy,kubelet}"

   scp kube-proxy.kubeconfig \
-    root@${host}:/var/lib/kube-proxy/kubeconfig \
+    root@${host}:/var/lib/kube-proxy/kubeconfig

   scp ${host}.kubeconfig \
     root@${host}:/var/lib/kubelet/kubeconfig
 done
 ```

-Copy the `kube-controller-manager` and `kube-scheduler` kubeconfig files to the `server` machine:
+Copy the `kube-controller-manager` and `kube-scheduler` kubeconfig files to the `controlplane` machine:

 ```bash
 scp admin.kubeconfig \
   kube-controller-manager.kubeconfig \
   kube-scheduler.kubeconfig \
-  root@server:~/
+  root@controlplane:~/
 ```

 Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
diff --git a/docs/06-data-encryption-keys.md b/docs/06-data-encryption-keys.md
index be613a0..915cd52 100644
--- a/docs/06-data-encryption-keys.md
+++ b/docs/06-data-encryption-keys.md
@@ -1,8 +1,11 @@
 # Generating the Data Encryption Config and Key

-Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest.
+Kubernetes stores a variety of data including cluster state, application
+configurations, and secrets. Kubernetes supports the ability to [encrypt]
+cluster data at rest.

-In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets.
+In this lab you will generate an encryption key and an [encryption config]
+suitable for encrypting Kubernetes Secrets.

 ## The Encryption Key

@@ -21,10 +24,16 @@ envsubst < configs/encryption-config.yaml \
   > encryption-config.yaml
 ```

-Copy the `encryption-config.yaml` encryption config file to each controller instance:
+Copy the `encryption-config.yaml` encryption config file to each controller
+instance:

 ```bash
-scp encryption-config.yaml root@server:~/
+scp encryption-config.yaml root@controlplane:~/
 ```

 Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
+
+---
+
+[encrypt]: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data
+[encryption config]: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration
\ No newline at end of file
diff --git a/docs/07-bootstrapping-etcd.md b/docs/07-bootstrapping-etcd.md
index 46b3c24..f8ed5aa 100644
--- a/docs/07-bootstrapping-etcd.md
+++ b/docs/07-bootstrapping-etcd.md
@@ -1,23 +1,25 @@
 # Bootstrapping the etcd Cluster

-Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a single node etcd cluster.
+Kubernetes components are stateless and store cluster state in [etcd]. In this
+lab you will bootstrap a single-node etcd cluster.

 ## Prerequisites

-Copy `etcd` binaries and systemd unit files to the `server` machine:
+Copy `etcd` binaries and systemd unit files to the `controlplane` machine:

 ```bash
 scp \
   downloads/controller/etcd \
   downloads/client/etcdctl \
   units/etcd.service \
-  root@server:~/
+  root@controlplane:~/
 ```

-The commands in this lab must be run on the `server` machine. Login to the `server` machine using the `ssh` command. Example:
+The commands in this lab must be run on the `controlplane` machine. Log in to
+the `controlplane` machine using the `ssh` command. Example:

 ```bash
-ssh root@server
+ssh root@controlplane
 ```

 ## Bootstrapping an etcd Cluster
@@ -38,12 +40,13 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
 {
   mkdir -p /etc/etcd /var/lib/etcd
   chmod 700 /var/lib/etcd
-  cp ca.crt kube-api-server.key kube-api-server.crt \
+  cp ca.crt kube-apiserver.key kube-apiserver.crt \
     /etc/etcd/
 }
 ```

-Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
+Each etcd member must have a unique name within an etcd cluster. Set the etcd
+name to match the hostname of the current compute instance:

 Create the `etcd.service` systemd unit file:

@@ -74,3 +77,7 @@ etcdctl member list
 ```

 Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
+
+---
+
+[etcd]: https://github.com/etcd-io/etcd
\ No newline at end of file
diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md
index ed3608b..b0385d5 100644
--- a/docs/08-bootstrapping-kubernetes-controllers.md
+++ b/docs/08-bootstrapping-kubernetes-controllers.md
@@ -1,10 +1,10 @@
 # Bootstrapping the Kubernetes Control Plane

-In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the `server` machine: Kubernetes API Server, Scheduler, and Controller Manager.
+In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the `controlplane` machine: Kubernetes API Server, Scheduler, and Controller Manager.

 ## Prerequisites

-Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files to the `server` machine:
+Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files to the `controlplane` machine:

 ```bash
 scp \
@@ -17,13 +17,13 @@ scp \
   units/kube-scheduler.service \
   configs/kube-scheduler.yaml \
   configs/kube-apiserver-to-kubelet.yaml \
-  root@server:~/
+  root@controlplane:~/
 ```

-The commands in this lab must be run on the `server` machine. Login to the `server` machine using the `ssh` command. Example:
+The commands in this lab must be run on the `controlplane` machine. Log in to the `controlplane` machine using the `ssh` command. Example:

 ```bash
-ssh root@server
+ssh root@controlplane
 ```

 ## Provision the Kubernetes Control Plane
@@ -54,7 +54,7 @@ Install the Kubernetes binaries:
   mkdir -p /var/lib/kubernetes/

   mv ca.crt ca.key \
-    kube-api-server.key kube-api-server.crt \
+    kube-apiserver.key kube-apiserver.crt \
     service-accounts.key service-accounts.crt \
     encryption-config.yaml \
     /var/lib/kubernetes/
@@ -155,10 +155,10 @@ In this section you will configure RBAC permissions to allow the Kubernetes API

 > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access) API to determine authorization.

-The commands in this section will affect the entire cluster and only need to be run on the `server` machine.
+The commands in this section will affect the entire cluster and only need to be run on the `controlplane` machine.

 ```bash
-ssh root@server
+ssh root@controlplane
 ```

 Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
@@ -172,7 +172,7 @@ kubectl apply -f kube-apiserver-to-kubelet.yaml \

 At this point the Kubernetes control plane is up and running. Run the following commands from the `jumpbox` machine to verify it's working:

-Make a HTTP request for the Kubernetes version info:
+Make an HTTP request for the Kubernetes version info:

 ```bash
 curl --cacert ca.crt \
diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md
index 619aeca..4e12035 100644
--- a/docs/09-bootstrapping-kubernetes-workers.md
+++ b/docs/09-bootstrapping-kubernetes-workers.md
@@ -1,6 +1,8 @@
 # Bootstrapping the Kubernetes Worker Nodes

-In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
+In this lab you will bootstrap two Kubernetes worker nodes. The following
+components will be installed: [runc], [container networking plugins],
+[containerd], [kubelet], and [kube-proxy].

 ## Prerequisites

@@ -39,13 +41,14 @@ done
 ```bash
 for HOST in node01 node02; do
-  scp \
-    downloads/cni-plugins/* \
+  scp -r \
+    downloads/cni-plugins/ \
     root@${HOST}:~/cni-plugins/
 done
 ```

-The commands in the next section must be run on each worker instance: `node01`, `node02`. Login to the worker instance using the `ssh` command. Example:
+The commands in the next section must be run on each worker instance: `node01`,
+`node02`. Log in to the worker instance using the `ssh` command. Example:

 ```bash
 ssh root@node01
 ```
@@ -66,7 +69,9 @@ Install the OS dependencies:

 Disable Swap

-Kubernetes has limited support for the use of swap memory, as it is difficult to provide guarantees and account for pod memory utilization when swap is involved.
+Kubernetes has limited support for the use of swap memory, as it is difficult
+to provide guarantees and account for pod memory utilization when swap is
+involved.

 Verify if swap is disabled:

@@ -74,13 +79,15 @@ Verify if swap is disabled:
 swapon --show
 ```

-If output is empty then swap is disabled. If swap is enabled run the following command to disable swap immediately:
+If output is empty then swap is disabled. If swap is enabled run the following
+command to disable swap immediately:

 ```bash
 swapoff -a
 ```

-> To ensure swap remains off after reboot consult your Linux distro documentation.
+> To ensure swap remains off after reboot consult your Linux distro
+> documentation.

 Create the installation directories:

@@ -98,9 +105,9 @@ Install the worker binaries:

 ```bash
 {
-  mv crictl kube-proxy kubelet runc \
-    /usr/local/bin/
-  mv containerd containerd-shim-runc-v2 containerd-stress /bin/
+  mv crictl kube-proxy kubelet /usr/local/bin/
+  mv runc /usr/local/sbin/
+  mv containerd ctr containerd-shim-runc-v2 containerd-stress /bin/
   mv cni-plugins/* /opt/cni/bin/
 }
 ```
@@ -113,7 +120,8 @@ Create the `bridge` network configuration file:
 mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
 ```

-To ensure network traffic crossing the CNI `bridge` network is processed by `iptables`, load and configure the `br-netfilter` kernel module:
+To ensure network traffic crossing the CNI `bridge` network is processed by
+`iptables`, load and configure the `br-netfilter` kernel module:

 ```bash
 {
@@ -193,7 +201,7 @@ Run the following commands from the `jumpbox` machine.
 List the registered Kubernetes nodes:

 ```bash
-ssh root@server \
+ssh root@controlplane \
   "kubectl get nodes \
     --kubeconfig admin.kubeconfig"
 ```
@@ -205,3 +213,11 @@ node02 Ready 10s v1.32.3
 ```

 Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
+
+---
+
+[runc]: https://github.com/opencontainers/runc
+[container networking plugins]: https://github.com/containernetworking/cni
+[containerd]: https://github.com/containerd/containerd
+[kubelet]: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet
+[kube-proxy]: https://kubernetes.io/docs/concepts/cluster-administration/proxies
diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md
index cadb2b6..5091cf7 100644
--- a/docs/11-pod-network-routes.md
+++ b/docs/11-pod-network-routes.md
@@ -1,20 +1,23 @@
 # Provisioning Pod Network Routes

-Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network [routes](https://cloud.google.com/compute/docs/vpc/routes).
+Pods scheduled to a node receive an IP address from the node's Pod CIDR range.
+At this point pods cannot communicate with other pods running on different
+nodes due to missing network [routes].

-In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
+In this lab you will create a route for each worker node that maps the node's
+Pod CIDR range to the node's internal IP address.

-> There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
+> There are [other ways] to implement the Kubernetes networking model.

 ## The Routing Table

-In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network.
+In this section you will gather the information required to create routes in
+the `kubernetes-the-hard-way` VPC network.

 Print the internal IP address and Pod CIDR range for each worker instance:

 ```bash
 {
-  SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
   NODE_0_IP=$(grep node01 machines.txt | cut -d " " -f 1)
   NODE_0_SUBNET=$(grep node01 machines.txt | cut -d " " -f 4)
   NODE_1_IP=$(grep node02 machines.txt | cut -d " " -f 1)
@@ -23,7 +26,7 @@
 ```

 ```bash
-ssh root@server <
-> The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
+> The LoadBalancer service type cannot be used because your cluster is not
+> configured with [cloud provider integration]. Setting up cloud provider
+> integration is out of scope for this tutorial.

 Retrieve the node port assigned to the `nginx` service:
@@ -205,3 +213,14 @@ Accept-Ranges: bytes
 ```

 Next: [Cleaning Up](13-cleanup.md)
+
+---
+
+[encrypt secret data at rest]: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted
+[Deployments]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+[nginx]: https://nginx.org/en/
+[retrieve container logs]: https://kubernetes.io/docs/concepts/cluster-administration/logging/
+[execute commands in a container]: https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container
+[Service]: https://kubernetes.io/docs/concepts/services-networking/service/
+[NodePort]: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
+[cloud provider integration]: https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider
diff --git a/docs/13-cleanup.md b/docs/13-cleanup.md
index 93634d3..3544b3e 100644
--- a/docs/13-cleanup.md
+++ b/docs/13-cleanup.md
@@ -4,8 +4,13 @@ In this lab you will delete the compute resources created during this tutorial.

 ## Compute Instances

-Previous versions of this guide made use of GCP resources for various aspects of compute and networking. The current version is agnostic, and all configuration is performed on the `jumpbox`, `server`, or nodes.
+Previous versions of this guide made use of GCP resources for various aspects
+of compute and networking. The current version is agnostic, and all
+configuration is performed on the `jumpbox`, `controlplane`, or nodes.

-Clean up is as simple as deleting all virtual machines you created for this exercise.
+Clean up is as simple as deleting all virtual machines you created for this
+exercise.
+If you used the provided virtual machines, `cd` into the `virtual-machines`
+directory and run `vagrant destroy`.

 Next: [Start Over](../README.md)
diff --git a/docs/resources.md b/docs/resources.md
index f3ca7cf..135289c 100644
--- a/docs/resources.md
+++ b/docs/resources.md
@@ -6,9 +6,14 @@ Quick access to information to help you when you run into trouble.
 * [Installing containerd]
 * [Generate Certificates Manually with OpenSSL]
 * [Running Kubelet in Standalone Mode]
+* [Using RBAC Authorization]
+* [Using Node Authorization]
+
 ---

 [Install and configure prerequisites]: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#install-and-configure-prerequisites
 [Installing containerd]: https://github.com/containerd/containerd/blob/main/docs/getting-started.md#installing-containerd
 [Running Kubelet in Standalone Mode]: https://v1-32.docs.kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/
-[Generate Certificates Manually with OpenSSL]: https://v1-32.docs.kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl
\ No newline at end of file
+[Generate Certificates Manually with OpenSSL]: https://v1-32.docs.kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl
+[Using RBAC Authorization]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
+[Using Node Authorization]: https://kubernetes.io/docs/reference/access-authn-authz/node/
diff --git a/units/etcd.service b/units/etcd.service
index de88548..4deb190 100644
--- a/units/etcd.service
+++ b/units/etcd.service
@@ -5,7 +5,7 @@ Documentation=https://github.com/etcd-io/etcd
 [Service]
 Type=notify
 ExecStart=/usr/local/bin/etcd \
-  --name controller \
+  --name controlplane \
   --initial-advertise-peer-urls http://127.0.0.1:2380 \
   --listen-peer-urls http://127.0.0.1:2380 \
   --listen-client-urls http://127.0.0.1:2379 \
diff --git a/units/kube-apiserver.service b/units/kube-apiserver.service
index 4abf83d..1e60b12 100644
--- a/units/kube-apiserver.service
+++ b/units/kube-apiserver.service
@@ -17,15 +17,15 @@ ExecStart=/usr/local/bin/kube-apiserver \
   --event-ttl=1h \
   --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
   --kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
-  --kubelet-client-certificate=/var/lib/kubernetes/kube-api-server.crt \
-  --kubelet-client-key=/var/lib/kubernetes/kube-api-server.key \
+  --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \
+  --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \
   --runtime-config='api/all=true' \
   --service-account-key-file=/var/lib/kubernetes/service-accounts.crt \
   --service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \
   --service-account-issuer=https://controlplane.kubernetes.local:6443 \
   --service-node-port-range=30000-32767 \
-  --tls-cert-file=/var/lib/kubernetes/kube-api-server.crt \
-  --tls-private-key-file=/var/lib/kubernetes/kube-api-server.key \
+  --tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \
+  --tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \
   --v=2
 Restart=on-failure
 RestartSec=5