update to kubernetes 1.27.4

This commit is contained in:
Alessandro Lenzen
2023-08-01 15:48:20 +02:00
parent 79a3f79b27
commit af7ffdb8e6
21 changed files with 1181 additions and 1226 deletions

View File

@@ -8,18 +8,31 @@ This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) t
> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
## Google Cloud Platform SDK
## Google Cloud Command Line Interface (gcloud CLI)
### Install the Google Cloud SDK
### Install the Google Cloud CLI
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
Follow the gcloud CLI [documentation](https://cloud.google.com/cli) to install and configure the `gcloud` command line utility.
Verify the Google Cloud SDK version is 338.0.0 or higher:
Verify the Google Cloud SDK version is 440.0.0 or higher:
```
gcloud version
```
> output
```
Google Cloud SDK 440.0.0
alpha 2023.07.21
beta 2023.07.21
bq 2.0.94
bundled-python3-unix 3.9.16
core 2023.07.21
gcloud-crc32c 1.0.0
gsutil 5.25
```
### Set a Default Compute Region and Zone
This tutorial assumes a default compute region and zone have been configured.
@@ -36,28 +49,31 @@ Then be sure to authorize gcloud to access the Cloud Platform with your Google u
gcloud auth login
```
Next set a default compute region and compute zone:
Next, set a default compute region and zone in your local client:
```
gcloud config set compute/region us-west1
```
REGION='us-east1'
Set a default compute zone:
ZONE='us-east1-d'
```
gcloud config set compute/zone us-west1-c
gcloud config set compute/region "${REGION}"
gcloud config set compute/zone "${ZONE}"
gcloud compute project-info add-metadata \
--metadata "google-compute-default-region=${REGION},google-compute-default-zone=${ZONE}"
```
> Use the `gcloud compute zones list` command to view additional regions and zones.
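For example, to limit the listing to a single region (the region name below is only illustrative):
```
gcloud compute zones list --filter region:us-east1
```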
## Running Commands in Parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
[tmux](https://tmux.github.io/) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances; in those cases, consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
> The use of tmux is optional and not required to complete this tutorial.
![tmux screenshot](images/tmux-screenshot.png)
![tmux screenshot](./images/tmux-screenshot.png)
> Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
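The same setup can also be scripted; an optional sketch (the session name `kthw` is arbitrary, and the three-pane layout simply mirrors the three controller or worker instances):
```
# start a detached session, split it into three panes, and mirror keystrokes
tmux new-session -d -s kthw
tmux split-window -h -t kthw
tmux split-window -v -t kthw
tmux set-window-option -t kthw synchronize-panes on
tmux attach-session -t kthw
```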
Next: [Installing the Client Tools](02-client-tools.md)
Next: [Installing the Client Tools](./02-client-tools.md)

View File

@@ -1,30 +1,30 @@
# Installing the Client Tools
In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
In this lab you will install the command line utilities required to complete this tutorial: [cfssl, cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl).
## Install CFSSL
The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
The `cfssl` and `cfssljson` command line utilities will be used to provision a [public key infrastructure (PKI)](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
Download and install `cfssl` and `cfssljson`:
### OS X
```
curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssl
curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssljson
```
ARCH='arm64' # replace arm64 with amd64 if needed
curl --location --output cfssl --time-cond cfssl \
"https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_darwin_${ARCH}"
curl --location --output cfssljson --time-cond cfssljson \
"https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_${ARCH}"
```
chmod +x cfssl cfssljson
```
```
sudo mv cfssl cfssljson /usr/local/bin/
```
Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:
Some OS X users may experience problems using the pre-built binaries, in which case [Homebrew](https://github.com/Homebrew/brew) might be a better option:
```
brew install cfssl
@@ -33,22 +33,18 @@ brew install cfssl
### Linux
```
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
```
curl --location --output cfssl --time-cond cfssl \
https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64
```
chmod +x cfssl cfssljson
```
curl --location --output cfssljson --time-cond cfssljson \
https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64
```
sudo mv cfssl cfssljson /usr/local/bin/
sudo install --mode 0755 cfssl cfssljson /usr/local/bin/
```
### Verification
Verify `cfssl` and `cfssljson` version 1.4.1 or higher is installed:
Verify `cfssl` and `cfssljson` version 1.6.4 or higher is installed:
```
cfssl version
@@ -57,16 +53,19 @@ cfssl version
> output
```
Version: 1.4.1
Runtime: go1.12.12
Version: 1.6.4
Runtime: go1.18
```
```
cfssljson --version
```
> output
```
Version: 1.4.1
Runtime: go1.12.12
Version: 1.6.4
Runtime: go1.18
```
## Install kubectl
@@ -76,43 +75,36 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S
### OS X
```
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/darwin/amd64/kubectl
```
curl --location --remote-name --time-cond kubectl \
"https://dl.k8s.io/release/v1.27.4/bin/darwin/${ARCH}/kubectl"
```
chmod +x kubectl
```
```
sudo mv kubectl /usr/local/bin/
```
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
```
curl --location --remote-name --time-cond kubectl \
https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl
```
chmod +x kubectl
```
```
sudo mv kubectl /usr/local/bin/
sudo install --mode 0755 kubectl /usr/local/bin/
```
### Verification
Verify `kubectl` version 1.21.0 or higher is installed:
Verify `kubectl` version 1.27.4 or higher is installed:
```
kubectl version --client
kubectl version --client --short
```
> output
```
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Client Version: v1.27.4
Kustomize Version: v5.0.1
```
Next: [Provisioning Compute Resources](03-compute-resources.md)
Next: [Provisioning Compute Resources](./03-compute-resources.md)

View File

@@ -1,18 +1,18 @@
# Provisioning Compute Resources
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones).
> Ensure a default compute zone and region have been set as described in the [Prerequisites](01-prerequisites.md#set-a-default-compute-region-and-zone) lab.
> Ensure a default compute zone and region have been set as described in the [Prerequisites](./01-prerequisites.md#set-a-default-compute-region-and-zone) lab.
## Networking
The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints.
The Kubernetes [network model](https://kubernetes.io/docs/concepts/services-networking/#the-kubernetes-network-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired, [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and with external network endpoints.
> Setting up network policies is out of scope for this tutorial.
### Virtual Private Cloud Network
### Virtual Private Cloud (VPC) Network
In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster.
In this section a dedicated [VPC network](https://cloud.google.com/vpc/docs/vpc) will be setup to host the Kubernetes cluster.
Create the `kubernetes-the-hard-way` custom VPC network:
@@ -20,7 +20,7 @@ Create the `kubernetes-the-hard-way` custom VPC network:
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
```
A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
A [subnet](https://cloud.google.com/vpc/docs/vpc#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
@@ -52,12 +52,12 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--source-ranges 0.0.0.0/0
```
> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
> An [external load balancer](https://cloud.google.com/load-balancing/docs/network) will be used to expose the Kubernetes API Servers to remote clients.
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
gcloud compute firewall-rules list --filter network:kubernetes-the-hard-way
```
> output
@@ -65,7 +65,7 @@ gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp False
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp Fals
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp False
```
### Kubernetes Public IP Address
@@ -73,26 +73,25 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
```
gcloud compute addresses create kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
gcloud compute addresses create kubernetes-the-hard-way
```
Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
```
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
gcloud compute addresses list --filter name=kubernetes-the-hard-way
```
> output
```
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
kubernetes-the-hard-way XX.XXX.XXX.XXX EXTERNAL us-west1 RESERVED
kubernetes-the-hard-way XX.XXX.XXX.XXX EXTERNAL us-east1 RESERVED
```
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 20.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
The compute instances in this lab will be provisioned using [Ubuntu Server 22.04 LTS](https://ubuntu.com/server), which has good support for the [containerd](https://github.com/containerd/containerd) container runtime. Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
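To confirm the `ubuntu-2204-lts` image family is available before creating instances, an optional check along these lines can be run (the output format is just a suggestion):
```
gcloud compute images describe-from-family ubuntu-2204-lts \
  --project ubuntu-os-cloud \
  --format 'value(name)'
```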
### Kubernetes Controllers
@@ -100,14 +99,14 @@ Create three compute instances which will host the Kubernetes control plane:
```
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
gcloud compute instances create "controller-${i}" \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-2004-lts \
--image-family ubuntu-2204-lts \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--private-network-ip 10.240.0.1${i} \
--private-network-ip "10.240.0.1${i}" \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,controller
@@ -124,15 +123,15 @@ Create three compute instances which will host the Kubernetes worker nodes:
```
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
gcloud compute instances create "worker-${i}" \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-2004-lts \
--image-family ubuntu-2204-lts \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--metadata "pod-cidr=10.200.${i}.0/24" \
--private-network-ip "10.240.0.2${i}" \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,worker
@@ -144,24 +143,24 @@ done
List the compute instances in your default compute zone:
```
gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
gcloud compute instances list --filter tags.items=kubernetes-the-hard-way
```
> output
```
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-west1-c e2-standard-2 10.240.0.10 XX.XX.XX.XXX RUNNING
controller-1 us-west1-c e2-standard-2 10.240.0.11 XX.XXX.XXX.XX RUNNING
controller-2 us-west1-c e2-standard-2 10.240.0.12 XX.XXX.XX.XXX RUNNING
worker-0 us-west1-c e2-standard-2 10.240.0.20 XX.XX.XXX.XXX RUNNING
worker-1 us-west1-c e2-standard-2 10.240.0.21 XX.XX.XX.XXX RUNNING
worker-2 us-west1-c e2-standard-2 10.240.0.22 XX.XXX.XX.XX RUNNING
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-east1-d e2-standard-2 10.240.0.10 XX.XXX.XX.XXX RUNNING
controller-1 us-east1-d e2-standard-2 10.240.0.11 XX.XXX.XX.XXX RUNNING
controller-2 us-east1-d e2-standard-2 10.240.0.12 XX.XXX.XX.XX RUNNING
worker-0 us-east1-d e2-standard-2 10.240.0.20 XX.XX.XX.XXX RUNNING
worker-1 us-east1-d e2-standard-2 10.240.0.21 XX.XXX.XXX.XXX RUNNING
worker-2 us-east1-d e2-standard-2 10.240.0.22 XX.XXX.XXX.XX RUNNING
```
## Configuring SSH Access
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time, SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to Linux VMs](https://cloud.google.com/compute/docs/connect/standard-ssh) documentation.
Test SSH access to the `controller-0` compute instances:
@@ -172,8 +171,8 @@ gcloud compute ssh controller-0
If this is your first time connecting to a compute instance, SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
```
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: The public SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
@@ -184,23 +183,23 @@ Enter same passphrase again:
At this point the generated SSH keys will be uploaded and stored in your project:
```
Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
Your identification has been saved in "/home/${USER}/.ssh/google_compute_engine"
Your public key has been saved in "/home/${USER}/.ssh/google_compute_engine.pub"
The key fingerprint is:
SHA256:nz1i8jHmgQuGt+WscqP5SeIaSy5wyIJeL71MuV+QruE $USER@$HOSTNAME
SHA256:OvopaMrkGOrbB0u2JMdwDvH6wGQBieKUC+XRAAm07RI "${USER}@${HOSTNAME}"
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . |
|o. oS |
|=... .o .o o |
|+.+ =+=.+.X o |
|.+ ==O*B.B = . |
| .+.=EB++ o |
+---[RSA 3072]----+
|O*=o |
|**o.. |
|=E*. |
| Boo |
|+.B. S |
| =.O . |
|..O.+ o |
|*.++.o o |
|=B..ooo |
+----[SHA256]-----+
Updating project ssh metadata...-Updated [https://www.googleapis.com/compute/v1/projects/$PROJECT_ID].
Updating project ssh metadata...Updated ["https://www.googleapis.com/compute/v1/projects/${PROJECT_ID}"].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
```
@@ -208,20 +207,21 @@ Waiting for SSH key to propagate.
After the SSH keys have been updated you'll be logged into the `controller-0` instance:
```
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1042-gcp x86_64)
Welcome to Ubuntu 22.04.2 LTS (GNU/Linux 5.19.0-1027-gcp x86_64)
...
```
Type `exit` at the prompt to exit the `controller-0` compute instance:
```
$USER@controller-0:~$ exit
exit
```
> output
```
logout
Connection to XX.XX.XX.XXX closed
Connection to XX.XXX.XX.XXX closed.
```
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)
Next: [Provisioning a CA and Generating TLS Certificates](./04-certificate-authority.md)

View File

@@ -1,17 +1,15 @@
# Provisioning a CA and Generating TLS Certificates
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
In this lab you will provision a [public key infrastructure (PKI)](https://en.wikipedia.org/wiki/Public_key_infrastructure) using [CloudFlare's PKI/TLS toolkit](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority (CA), and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
## Certificate Authority
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
In this section you will provision a CA that can be used to generate additional TLS certificates.
Generate the CA configuration file, certificate, and private key:
```
{
cat > ca-config.json <<EOF
cat <<EOF >ca-config.json
{
"signing": {
"default": {
@@ -27,7 +25,7 @@ cat > ca-config.json <<EOF
}
EOF
cat > ca-csr.json <<EOF
cat <<EOF >ca-csr.json
{
"CN": "Kubernetes",
"key": {
@@ -46,9 +44,8 @@ cat > ca-csr.json <<EOF
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
cfssl gencert -initca ca-csr.json \
| cfssljson -bare ca
```
Results:
@@ -67,9 +64,7 @@ In this section you will generate client and server certificates for each Kubern
Generate the `admin` client certificate and private key:
```
{
cat > admin-csr.json <<EOF
cat <<EOF >admin-csr.json
{
"CN": "admin",
"key": {
@@ -89,13 +84,12 @@ cat > admin-csr.json <<EOF
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
-ca ca.pem \
-ca-key ca-key.pem \
-config ca-config.json \
-profile kubernetes \
admin-csr.json \
| cfssljson -bare admin
```
Results:
@@ -107,13 +101,13 @@ admin.pem
### The Kubelet Client Certificates
Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/) called Node Authorizer, that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/reference/access-authn-authz/node/) called Node Authorizer, which specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
Generate a certificate and private key for each Kubernetes worker node:
```
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
cat <<EOF >"${instance}-csr.json"
{
"CN": "system:node:${instance}",
"key": {
@@ -132,19 +126,20 @@ cat > ${instance}-csr.json <<EOF
}
EOF
EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
EXTERNAL_IP="$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')"
INTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')
INTERNAL_IP="$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')"
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
cfssl gencert \
-ca ca.pem \
-ca-key ca-key.pem \
-config ca-config.json \
-hostname "${instance},${EXTERNAL_IP},${INTERNAL_IP}" \
-profile kubernetes \
"${instance}-csr.json" \
| cfssljson -bare "${instance}"
done
```
@@ -164,9 +159,7 @@ worker-2.pem
Generate the `kube-controller-manager` client certificate and private key:
```
{
cat > kube-controller-manager-csr.json <<EOF
cat <<EOF >kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"key": {
@@ -186,13 +179,12 @@ cat > kube-controller-manager-csr.json <<EOF
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
-ca ca.pem \
-ca-key ca-key.pem \
-config ca-config.json \
-profile kubernetes \
kube-controller-manager-csr.json \
| cfssljson -bare kube-controller-manager
```
Results:
@@ -202,15 +194,12 @@ kube-controller-manager-key.pem
kube-controller-manager.pem
```
### The Kube Proxy Client Certificate
Generate the `kube-proxy` client certificate and private key:
```
{
cat > kube-proxy-csr.json <<EOF
cat <<EOF >kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
@@ -230,13 +219,12 @@ cat > kube-proxy-csr.json <<EOF
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
}
-ca ca.pem \
-ca-key ca-key.pem \
-config ca-config.json \
-profile kubernetes \
kube-proxy-csr.json \
| cfssljson -bare kube-proxy
```
Results:
@@ -251,9 +239,7 @@ kube-proxy.pem
Generate the `kube-scheduler` client certificate and private key:
```
{
cat > kube-scheduler-csr.json <<EOF
cat <<EOF >kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"key": {
@@ -273,13 +259,12 @@ cat > kube-scheduler-csr.json <<EOF
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
-ca ca.pem \
-ca-key ca-key.pem \
-config ca-config.json \
-profile kubernetes \
kube-scheduler-csr.json \
| cfssljson -bare kube-scheduler
```
Results:
@@ -289,7 +274,6 @@ kube-scheduler-key.pem
kube-scheduler.pem
```
### The Kubernetes API Server Certificate
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
@@ -297,15 +281,12 @@ The `kubernetes-the-hard-way` static IP address will be included in the list of
Generate the Kubernetes API Server certificate and private key:
```
{
KUBERNETES_PUBLIC_ADDRESS="$(gcloud compute addresses describe kubernetes-the-hard-way \
--format 'value(address)')"
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_HOSTNAMES='kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local'
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF
cat <<EOF >kubernetes-csr.json
{
"CN": "kubernetes",
"key": {
@@ -325,14 +306,13 @@ cat > kubernetes-csr.json <<EOF
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
-ca ca.pem \
-ca-key ca-key.pem \
-config ca-config.json \
-hostname "10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES}" \
-profile kubernetes \
kubernetes-csr.json \
| cfssljson -bare kubernetes
```
> The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
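Optionally, the subject alternative names embedded in the certificate can be inspected before moving on; a quick sketch using `openssl` (assumed to be available locally):
```
openssl x509 -in kubernetes.pem -noout -text \
  | grep --after-context 1 'Subject Alternative Name'
```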
@@ -346,14 +326,12 @@ kubernetes.pem
## The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/) documentation.
Generate the `service-account` certificate and private key:
```
{
cat > service-account-csr.json <<EOF
cat <<EOF >service-account-csr.json
{
"CN": "service-accounts",
"key": {
@@ -373,13 +351,12 @@ cat > service-account-csr.json <<EOF
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
-ca ca.pem \
-ca-key ca-key.pem \
-config ca-config.json \
-profile kubernetes \
service-account-csr.json \
| cfssljson -bare service-account
```
Results:
@@ -389,14 +366,16 @@ service-account-key.pem
service-account.pem
```
## Distribute the Client and Server Certificates
Copy the appropriate certificates and private keys to each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
gcloud compute scp \
ca.pem \
"${instance}-key.pem" "${instance}.pem" \
"${instance}:"
done
```
@@ -404,11 +383,14 @@ Copy the appropriate certificates and private keys to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/
gcloud compute scp \
ca-key.pem ca.pem \
kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
"${instance}:"
done
```
> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
Next: [Generating Kubernetes Configuration Files for Authentication](./05-kubernetes-configuration-files.md)

View File

@@ -13,39 +13,39 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_PUBLIC_ADDRESS="$(gcloud compute addresses describe kubernetes-the-hard-way \
--format 'value(address)')"
```
### The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
When generating kubeconfig files for Kubelets, the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/reference/access-authn-authz/node/).
> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](./04-certificate-authority.md) lab.
Generate a kubeconfig file for each worker node:
```
for instance in worker-0 worker-1 worker-2; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${instance}.kubeconfig
--certificate-authority ca.pem \
--embed-certs \
--kubeconfig "${instance}.kubeconfig" \
--server "https://${KUBERNETES_PUBLIC_ADDRESS}:6443"
kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
kubectl config set-credentials "system:node:${instance}" \
--client-certificate "${instance}.pem" \
--client-key "${instance}-key.pem" \
--embed-certs \
--kubeconfig "${instance}.kubeconfig"
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
--cluster "kubernetes-the-hard-way" \
--kubeconfig "${instance}.kubeconfig" \
--user "system:node:${instance}"
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
kubectl config use-context default \
--kubeconfig "${instance}.kubeconfig"
done
```
@@ -62,26 +62,25 @@ worker-2.kubeconfig
Generate a kubeconfig file for the `kube-proxy` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority ca.pem \
--embed-certs \
--kubeconfig kube-proxy.kubeconfig \
--server "https://${KUBERNETES_PUBLIC_ADDRESS}:6443"
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate kube-proxy.pem \
--client-key kube-proxy-key.pem \
--embed-certs \
--kubeconfig kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster kubernetes-the-hard-way \
--kubeconfig kube-proxy.kubeconfig \
--user system:kube-proxy
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
kubectl config use-context default \
--kubeconfig kube-proxy.kubeconfig
```
Results:
@@ -95,26 +94,25 @@ kube-proxy.kubeconfig
Generate a kubeconfig file for the `kube-controller-manager` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority ca.pem \
--embed-certs \
--kubeconfig kube-controller-manager.kubeconfig \
--server https://127.0.0.1:6443
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate kube-controller-manager.pem \
--client-key kube-controller-manager-key.pem \
--embed-certs \
--kubeconfig kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster kubernetes-the-hard-way \
--kubeconfig kube-controller-manager.kubeconfig \
--user system:kube-controller-manager
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
kubectl config use-context default \
--kubeconfig kube-controller-manager.kubeconfig
```
Results:
@@ -129,26 +127,25 @@ kube-controller-manager.kubeconfig
Generate a kubeconfig file for the `kube-scheduler` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority ca.pem \
--embed-certs \
--kubeconfig kube-scheduler.kubeconfig \
--server https://127.0.0.1:6443
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate kube-scheduler.pem \
--client-key kube-scheduler-key.pem \
--embed-certs \
--kubeconfig kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster kubernetes-the-hard-way \
--kubeconfig kube-scheduler.kubeconfig \
--user system:kube-scheduler
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
kubectl config use-context default \
--kubeconfig kube-scheduler.kubeconfig
```
Results:
@@ -162,26 +159,25 @@ kube-scheduler.kubeconfig
Generate a kubeconfig file for the `admin` user:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority ca.pem \
--embed-certs \
--kubeconfig admin.kubeconfig \
--server https://127.0.0.1:6443
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate admin.pem \
--client-key admin-key.pem \
--embed-certs \
--kubeconfig admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster kubernetes-the-hard-way \
--kubeconfig admin.kubeconfig \
--user admin
kubectl config use-context default --kubeconfig=admin.kubeconfig
}
kubectl config use-context default \
--kubeconfig admin.kubeconfig
```
Results:
@@ -190,8 +186,7 @@ Results:
admin.kubeconfig
```
##
##
## Distribute the Kubernetes Configuration Files
@@ -199,7 +194,10 @@ Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
gcloud compute scp \
"${instance}.kubeconfig" \
kube-proxy.kubeconfig \
"${instance}:"
done
```
@@ -207,8 +205,12 @@ Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig f
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
gcloud compute scp \
admin.kubeconfig \
kube-controller-manager.kubeconfig \
kube-scheduler.kubeconfig \
"${instance}:"
done
```
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
Next: [Generating the Data Encryption Config and Key](./06-data-encryption-keys.md)

View File

@@ -1,6 +1,6 @@
# Generating the Data Encryption Config and Key
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest.
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) cluster data at rest.
In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets.
@@ -9,7 +9,7 @@ In this lab you will generate an encryption key and an [encryption config](https
Generate an encryption key:
```
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
ENCRYPTION_KEY="$(head -c 32 /dev/urandom | base64)"
```
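The key must decode to exactly 32 random bytes; an optional sanity check (GNU coreutils assumed) should print `32`:
```
echo -n "${ENCRYPTION_KEY}" | base64 --decode | wc --bytes
```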
## The Encryption Config File
@@ -17,9 +17,9 @@ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
Create the `encryption-config.yaml` encryption config file:
```
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
cat <<EOF >encryption-config.yaml
apiVersion: v1
kind: EncryptionConfig
resources:
- resources:
- secrets
@@ -36,8 +36,8 @@ Copy the `encryption-config.yaml` encryption config file to each controller inst
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
gcloud compute scp encryption-config.yaml "${instance}:"
done
```
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
Next: [Bootstrapping the etcd Cluster](./07-bootstrapping-etcd.md)

View File

@@ -12,7 +12,7 @@ gcloud compute ssh controller-0
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
[tmux](https://tmux.github.io/) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](./01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Bootstrapping an etcd Cluster Member
@@ -21,60 +21,61 @@ gcloud compute ssh controller-0
Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
curl --location --remote-name --time-cond etcd-v3.5.9-linux-amd64.tar.gz \
https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz
```
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
{
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
}
tar --extract --file etcd-v3.5.9-linux-amd64.tar.gz --verbose
sudo cp etcd-v3.5.9-linux-amd64/etcd* /usr/local/bin/
```
### Configure the etcd Server
```
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
sudo mkdir --parents /etc/etcd /var/lib/etcd
sudo chmod 0700 /etc/etcd/ /var/lib/etcd/
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
INTERNAL_IP="$(curl --silent --header 'Metadata-Flavor: Google' \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)"
```
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
```
ETCD_NAME=$(hostname -s)
ETCD_NAME="$(hostname --short)"
```
Create the `etcd.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
sudo mkdir --parents /usr/local/lib/systemd/system
cat <<EOF | sudo tee /usr/local/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
Documentation=https://github.com/etcd-io/etcd
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--cert-file /etc/etcd/kubernetes.pem \\
--key-file /etc/etcd/kubernetes-key.pem \\
--peer-cert-file /etc/etcd/kubernetes.pem \\
--peer-key-file /etc/etcd/kubernetes-key.pem \\
--trusted-ca-file /etc/etcd/ca.pem \\
--peer-trusted-ca-file /etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
@@ -84,7 +85,7 @@ ExecStart=/usr/local/bin/etcd \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
--data-dir /var/lib/etcd
Restart=on-failure
RestartSec=5
@@ -96,11 +97,7 @@ EOF
### Start the etcd Server
```
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
sudo systemctl enable --now etcd
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
@@ -111,10 +108,10 @@ List the etcd cluster members:
```
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
--cacert /etc/etcd/ca.pem \
--cert /etc/etcd/kubernetes.pem \
--endpoints https://127.0.0.1:2379 \
--key /etc/etcd/kubernetes-key.pem
```
> output
@@ -125,4 +122,4 @@ f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.24
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
```
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
Next: [Bootstrapping the Kubernetes Control Plane](./08-bootstrapping-kubernetes-controllers.md)

View File

@@ -12,14 +12,14 @@ gcloud compute ssh controller-0
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
[tmux](https://tmux.github.io/) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](./01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Provision the Kubernetes Control Plane
Create the Kubernetes configuration directory:
```
sudo mkdir -p /etc/kubernetes/config
sudo mkdir --parents /etc/kubernetes/config
```
### Download and Install the Kubernetes Controller Binaries
@@ -27,91 +27,91 @@ sudo mkdir -p /etc/kubernetes/config
Download the official Kubernetes release binaries:
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"
curl --location \
--remote-name --time-cond kube-apiserver \
https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kube-apiserver \
--remote-name --time-cond kube-controller-manager \
https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kube-controller-manager \
--remote-name --time-cond kube-scheduler \
https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kube-scheduler \
--remote-name --time-cond kubectl \
https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl
```
Install the Kubernetes binaries:
```
{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
sudo install --mode 0755 kube-apiserver kube-controller-manager \
kube-scheduler kubectl /usr/local/bin/
```
### Configure the Kubernetes API Server
```
{
sudo mkdir -p /var/lib/kubernetes/
sudo mkdir --parents /var/lib/kubernetes
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
sudo cp \
ca-key.pem ca.pem \
kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
/var/lib/kubernetes/
```
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
INTERNAL_IP="$(curl --silent --header 'Metadata-Flavor: Google' \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)"
```
REGION=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/project/attributes/google-compute-default-region)
```
REGION="$(curl --silent --header 'Metadata-Flavor: Google' \
http://metadata.google.internal/computeMetadata/v1/project/attributes/google-compute-default-region)"
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $REGION \
--format 'value(address)')
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe \
kubernetes-the-hard-way --region "${REGION}" --format 'value(address)')
```
Create the `kube-apiserver.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
sudo mkdir --parents /usr/local/lib/systemd/system
cat <<EOF | sudo tee /usr/local/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--runtime-config='api/all=true' \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
--advertise-address ${INTERNAL_IP} \\
--allow-privileged \\
--apiserver-count 3 \\
--audit-log-maxage 30 \\
--audit-log-maxbackup 3 \\
--audit-log-maxsize 100 \\
--audit-log-path /var/log/audit.log \\
--authorization-mode Node,RBAC \\
--bind-address 0.0.0.0 \\
--client-ca-file /var/lib/kubernetes/ca.pem \\
--enable-admission-plugins NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile /var/lib/kubernetes/ca.pem \\
--etcd-certfile /var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile /var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl 1h \\
--encryption-provider-config /var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority /var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate /var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key /var/lib/kubernetes/kubernetes-key.pem \\
--runtime-config 'api/all=true' \\
--service-account-key-file /var/lib/kubernetes/service-account.pem \\
--service-account-signing-key-file /var/lib/kubernetes/service-account-key.pem \\
--service-account-issuer https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
--service-cluster-ip-range 10.32.0.0/24 \\
--service-node-port-range 30000-32767 \\
--tls-cert-file /var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file /var/lib/kubernetes/kubernetes-key.pem \\
--v 2
Restart=on-failure
RestartSec=5
@@ -122,34 +122,34 @@ EOF
### Configure the Kubernetes Controller Manager
Move the `kube-controller-manager` kubeconfig into place:
Copy the `kube-controller-manager` kubeconfig into place:
```
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
```
Create the `kube-controller-manager.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
cat <<EOF | sudo tee /usr/local/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--bind-address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
--bind-address 0.0.0.0 \\
--cluster-cidr 10.200.0.0/16 \\
--cluster-name kubernetes \\
--cluster-signing-cert-file /var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file /var/lib/kubernetes/ca-key.pem \\
--kubeconfig /var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect \\
--root-ca-file /var/lib/kubernetes/ca.pem \\
--service-account-private-key-file /var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range 10.32.0.0/24 \\
--use-service-account-credentials \\
--v 2
Restart=on-failure
RestartSec=5
@@ -160,20 +160,20 @@ EOF
### Configure the Kubernetes Scheduler
Move the `kube-scheduler` kubeconfig into place:
Copy the `kube-scheduler` kubeconfig into place:
```
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
```
Create the `kube-scheduler.yaml` configuration file:
```
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
kubeconfig: /var/lib/kubernetes/kube-scheduler.kubeconfig
leaderElection:
leaderElect: true
EOF
@@ -182,15 +182,15 @@ EOF
Create the `kube-scheduler.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
cat <<EOF | sudo tee /usr/local/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
--config /etc/kubernetes/config/kube-scheduler.yaml \\
--v 2
Restart=on-failure
RestartSec=5
@@ -202,30 +202,23 @@ EOF
### Start the Controller Services
```
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
sudo systemctl enable --now kube-apiserver kube-controller-manager kube-scheduler
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
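Rather than waiting a fixed amount of time, the local `healthz` endpoint can be polled until the API server answers; an optional sketch (the endpoint is unauthenticated by default, as noted further below):
```
until curl --silent --fail \
  --cacert /var/lib/kubernetes/ca.pem \
  https://127.0.0.1:6443/healthz >/dev/null; do
  sleep 1
done
```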
### Enable HTTP Health Checks
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
A [Google Network Load Balancer](https://cloud.google.com/load-balancing/docs/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround, the nginx web server can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
> The `/healthz` API server endpoint does not require authentication by default.
Install a basic web server to handle HTTP health checks:
```
sudo apt-get update
sudo apt-get install -y nginx
```
sudo apt-get install --yes nginx
```
cat > kubernetes.default.svc.cluster.local <<EOF
cat <<EOF | sudo tee /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
server {
listen 80;
server_name kubernetes.default.svc.cluster.local;
@@ -236,31 +229,22 @@ server {
}
}
EOF
```
```
{
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln --symbolic \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local \
/etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}
```
```
sudo systemctl restart nginx
```
```
sudo systemctl enable nginx
```
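Optionally, confirm the nginx configuration parses cleanly and the service is running before relying on it for health checks:

```
sudo nginx -t
systemctl is-active nginx
```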
### Verification
```
kubectl cluster-info --kubeconfig admin.kubeconfig
```
> output
```
Kubernetes control plane is running at https://127.0.0.1:6443
```
@@ -268,20 +252,24 @@ Kubernetes control plane is running at https://127.0.0.1:6443
Test the nginx HTTP health check proxy:
```
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
curl --header 'Host: kubernetes.default.svc.cluster.local' --include \
http://127.0.0.1/healthz
```
> output
```
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 02 May 2021 04:19:29 GMT
Date: Wed, 26 Jul 2023 13:35:08 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Audit-Id: d87ab78c-776b-42f9-950c-42c7b6060e7f
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: c43f32eb-e038-457f-9474-571d43e5c325
X-Kubernetes-Pf-Prioritylevel-Uid: 8ba5908f-5569-4330-80fd-c643e7512366
X-Kubernetes-Pf-Flowschema-Uid: bb5f446a-26d9-4f6e-a18f-d40546253482
X-Kubernetes-Pf-Prioritylevel-Uid: 34a0ffbd-2fd0-44b8-b7ab-d9c883cabb34
ok
```
@@ -292,7 +280,7 @@ ok
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics and logs, and for executing commands in pods.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access) API to determine authorization.
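For illustration only (not a tutorial step), this is roughly the kind of SubjectAccessReview a kubelet in Webhook mode submits; creating one by hand from `controller-0` — once the objects below exist — returns whether the request would be allowed. The attribute values here are assumptions chosen to match this lab:

```
cat <<EOF | kubectl create --kubeconfig admin.kubeconfig --output yaml --filename -
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: kubernetes
  resourceAttributes:
    resource: nodes
    subresource: proxy
    verb: get
EOF
```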
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
@@ -300,10 +288,10 @@ The commands in this section will effect the entire cluster and only need to be
gcloud compute ssh controller-0
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig --filename -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
@@ -331,7 +319,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig --filename -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
@@ -354,39 +342,34 @@ In this section you will provision an external load balancer to front the Kubern
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
### Provision a Network Load Balancer
Create the external load balancer network resources:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_PUBLIC_ADDRESS="$(gcloud compute addresses describe kubernetes-the-hard-way \
--format 'value(address)')"
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
gcloud compute http-health-checks create kubernetes \
--description 'Kubernetes Health Check' \
--host kubernetes.default.svc.cluster.local \
--request-path /healthz
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--allow tcp \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
}
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address "${KUBERNETES_PUBLIC_ADDRESS}" \
--ports 6443 \
--target-pool kubernetes-target-pool
```
### Verification
@@ -396,15 +379,14 @@ Create the external load balancer network resources:
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_PUBLIC_ADDRESS="$(gcloud compute addresses describe kubernetes-the-hard-way \
--format 'value(address)')"
```
Make an HTTP request for the Kubernetes version info:
```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
curl --cacert ca.pem "https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version"
```
> output
@@ -412,15 +394,15 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
{
"major": "1",
"minor": "21",
"gitVersion": "v1.21.0",
"gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
"minor": "27",
"gitVersion": "v1.27.4",
"gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
"gitTreeState": "clean",
"buildDate": "2021-04-08T16:25:06Z",
"goVersion": "go1.16.1",
"buildDate": "2023-07-19T12:14:49Z",
"goVersion": "go1.20.6",
"compiler": "gc",
"platform": "linux/amd64"
}
```
Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)
Next: [Bootstrapping the Kubernetes Worker Nodes](./09-bootstrapping-kubernetes-workers.md)

View File

@@ -1,6 +1,6 @@
# Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [containerd](https://github.com/containerd/containerd), [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/plugins), [crictl](https://github.com/kubernetes-sigs/cri-tools), [kube-proxy](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/), [kubectl](https://kubernetes.io/docs/reference/kubectl/), and [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/).
## Prerequisites
@@ -12,17 +12,16 @@ gcloud compute ssh worker-0
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
[tmux](https://tmux.github.io/) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](./01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Provisioning a Kubernetes Worker Node
Install the OS dependencies:
Install the OS dependencies ([conntrack](https://conntrack-tools.netfilter.org/), [ipset](https://ipset.netfilter.org/), and [socat](http://www.dest-unreach.org/socat/)):
```
{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}
sudo apt-get update
sudo apt-get --yes install conntrack ipset socat
```
> The socat binary enables support for the `kubectl port-forward` command.
@@ -37,10 +36,10 @@ Verify if swap is enabled:
sudo swapon --show
```
If output is empthy then swap is not enabled. If swap is enabled run the following command to disable swap immediately:
If output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:
```
sudo swapoff -a
sudo swapoff --all
```
> To ensure swap remains off after reboot consult your Linux distro documentation.
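One common approach — shown here as a sketch, not a required step — is to comment out any swap entries in `/etc/fstab` so swap stays disabled across reboots:

```
# Prefix swap entries with '#' so they are ignored on the next boot.
sudo sed --in-place '/\sswap\s/ s/^/#/' /etc/fstab
```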
@@ -48,24 +47,33 @@ sudo swapoff -a
### Download and Install Worker Binaries
```
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
curl --location \
--remote-name --time-cond containerd-1.7.3-linux-amd64.tar.gz \
https://github.com/containerd/containerd/releases/download/v1.7.3/containerd-1.7.3-linux-amd64.tar.gz \
--remote-name --time-cond containerd.service \
https://raw.githubusercontent.com/containerd/containerd/v1.7.3/containerd.service \
--output runc --time-cond runc \
https://github.com/opencontainers/runc/releases/download/v1.1.8/runc.amd64 \
--remote-name --time-cond cni-plugins-linux-amd64-v1.3.0.tgz \
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz \
--remote-name --time-cond crictl-v1.27.1-linux-amd64.tar.gz \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.1/crictl-v1.27.1-linux-amd64.tar.gz \
--remote-name --time-cond kube-proxy \
https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kube-proxy \
--remote-name --time-cond kubectl \
https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubectl \
--remote-name --time-cond kubelet \
https://dl.k8s.io/release/v1.27.4/bin/linux/amd64/kubelet
```
Create the installation directories:
```
sudo mkdir -p \
sudo mkdir --parents \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubelet \
/var/lib/kubernetes \
/var/run/kubernetes
```
@@ -73,16 +81,21 @@ sudo mkdir -p \
Install the worker binaries:
```
{
mkdir containerd
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
}
sudo tar --directory /usr/local/ --extract \
--file containerd-1.7.3-linux-amd64.tar.gz --gunzip --verbose
sudo mkdir --parents /usr/local/lib/systemd/system
sudo cp containerd.service /usr/local/lib/systemd/system/
sudo install --mode 0755 runc /usr/local/sbin/
tar --extract --file crictl-v1.27.1-linux-amd64.tar.gz --gunzip --verbose
sudo tar --directory /opt/cni/bin/ --extract \
--file cni-plugins-linux-amd64-v1.3.0.tgz --gunzip --verbose
sudo install --mode 0755 crictl kube-proxy kubectl kubelet /usr/local/bin/
```
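Optionally, confirm the binaries are on the `PATH` and report the expected versions (an extra check, not part of the original steps):

```
containerd --version
runc --version
crictl --version
kubelet --version
```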
### Configure CNI Networking
@@ -90,40 +103,42 @@ Install the worker binaries:
Retrieve the Pod CIDR range for the current compute instance:
```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
POD_CIDR="$(curl --silent --header 'Metadata-Flavor: Google' \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)"
```
Create the `bridge` network configuration file:
Create the CNI config file:
```
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
cat << EOF | sudo tee /etc/cni/net.d/10-containerd-net.conflist
{
"cniVersion": "0.4.0",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "${POD_CIDR}"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
EOF
```
Create the `loopback` network configuration file:
```
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.4.0",
"name": "lo",
"type": "loopback"
"cniVersion": "1.0.0",
"name": "containerd-net",
"plugins": [
{
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"promiscMode": true,
"ipam": {
"type": "host-local",
"ranges": [
[{
"subnet": "${POD_CIDR}"
}]
],
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
},
{
"type": "portmap",
"capabilities": {"portMappings": true},
"externalSetMarkChain": "KUBE-MARK-MASQ"
}
]
}
EOF
```
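As an optional sanity check, you can pretty-print the rendered file to confirm it is valid JSON and that the `subnet` value matches this node's `POD_CIDR`:

```
python3 -m json.tool /etc/cni/net.d/10-containerd-net.conflist
```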
@@ -133,55 +148,19 @@ EOF
Create the `containerd` configuration file:
```
sudo mkdir -p /etc/containerd/
```
sudo mkdir --parents /etc/containerd
```
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
EOF
```
Create the `containerd.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
containerd config default | sudo tee /etc/containerd/config.toml
```
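If you want to review the configuration containerd will actually load (the defaults merged with the file above), one way — assuming a stock install — is:

```
sudo containerd config dump | less
```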
### Configure the Kubelet
```
{
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
}
sudo cp "${HOSTNAME}-key.pem" "${HOSTNAME}.pem" /var/lib/kubelet/
sudo cp "${HOSTNAME}.kubeconfig" /var/lib/kubelet/kubeconfig
sudo cp ca.pem /var/lib/kubernetes/
```
Create the `kubelet-config.yaml` configuration file:
@@ -210,12 +189,12 @@ tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
Create the `kubelet.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
cat <<EOF | sudo tee /usr/local/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
@@ -224,14 +203,11 @@ Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2
--config /var/lib/kubelet/kubelet-config.yaml \\
--container-runtime-endpoint unix:///var/run/containerd/containerd.sock \\
--kubeconfig /var/lib/kubelet/kubeconfig \\
--register-node \\
--v 2
Restart=on-failure
RestartSec=5
@@ -243,7 +219,7 @@ EOF
### Configure the Kubernetes Proxy
```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
sudo cp kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy-config.yaml` configuration file:
@@ -262,14 +238,14 @@ EOF
Create the `kube-proxy.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
cat <<EOF | sudo tee /usr/local/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
--config /var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
@@ -281,11 +257,7 @@ EOF
### Start the Worker Services
```
{
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
}
sudo systemctl enable --now containerd kubelet kube-proxy
```
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
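An optional quick check on each worker before moving on (not part of the original steps):

```
systemctl is-active containerd kubelet kube-proxy
```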
@@ -298,16 +270,16 @@ List the registered Kubernetes nodes:
```
gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
--command 'kubectl get nodes --kubeconfig admin.kubeconfig'
```
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 22s v1.21.0
worker-1 Ready <none> 22s v1.21.0
worker-2 Ready <none> 22s v1.21.0
worker-0 Ready <none> 37s v1.27.4
worker-1 Ready <none> 37s v1.27.4
worker-2 Ready <none> 37s v1.27.4
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
Next: [Configuring kubectl for Remote Access](./10-configuring-kubectl.md)

View File

@@ -11,26 +11,23 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_PUBLIC_ADDRESS="$(gcloud compute addresses describe kubernetes-the-hard-way \
--format 'value(address)')"
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority ca.pem \
--embed-certs \
--server "https://${KUBERNETES_PUBLIC_ADDRESS}:6443"
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
kubectl config set-credentials admin \
--client-certificate admin.pem \
--client-key admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config set-context kubernetes-the-hard-way \
--cluster kubernetes-the-hard-way \
--user admin
kubectl config use-context kubernetes-the-hard-way
}
kubectl config use-context kubernetes-the-hard-way
```
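These commands update the default kubeconfig (typically `~/.kube/config`); as an optional check, confirm the active context:

```
kubectl config current-context
```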
## Verification
@@ -38,14 +35,15 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
Check the version of the remote Kubernetes cluster:
```
kubectl version
kubectl version --short
```
> output
```
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Client Version: v1.27.4
Kustomize Version: v5.0.1
Server Version: v1.27.4
```
List the nodes in the remote Kubernetes cluster:
@@ -58,9 +56,9 @@ kubectl get nodes
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 2m35s v1.21.0
worker-1 Ready <none> 2m35s v1.21.0
worker-2 Ready <none> 2m35s v1.21.0
worker-0 Ready <none> 5m38s v1.27.4
worker-1 Ready <none> 5m38s v1.27.4
worker-2 Ready <none> 5m38s v1.27.4
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)
Next: [Provisioning Pod Network Routes](./11-pod-network-routes.md)

View File

@@ -1,10 +1,10 @@
# Provisioning Pod Network Routes
Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network [routes](https://cloud.google.com/compute/docs/vpc/routes).
Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods cannot communicate with other pods running on different nodes due to missing network [routes](https://cloud.google.com/vpc/docs/routes).
In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
> There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
> There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-network-model) to implement the Kubernetes networking model.
## The Routing Table
@@ -14,7 +14,7 @@ Print the internal IP address and Pod CIDR range for each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
gcloud compute instances describe "${instance}" \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
```
@@ -33,17 +33,17 @@ Create network routes for each worker instance:
```
for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
gcloud compute routes create "kubernetes-route-10-200-${i}-0-24" \
--destination-range "10.200.${i}.0/24" \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.2${i} \
--destination-range 10.200.${i}.0/24
--next-hop-address "10.240.0.2${i}"
done
```
List the routes in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute routes list --filter "network: kubernetes-the-hard-way"
gcloud compute routes list --filter 'network: kubernetes-the-hard-way'
```
> output
@@ -57,4 +57,4 @@ kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
```
Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
Next: [Deploying the DNS Cluster Add-on](./12-dns-addon.md)

View File

@@ -1,13 +1,13 @@
# Deploying the DNS Cluster Add-on
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://github.com/coredns/coredns), to applications running inside the Kubernetes cluster.
## The DNS Cluster Add-on
Deploy the `coredns` cluster add-on:
```
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
kubectl apply --filename ./manifests/coredns-1.10.1.yaml
```
> output
@@ -17,14 +17,14 @@ serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
deployment.apps/coredns created
```
List the pods created by the `kube-dns` deployment:
```
kubectl get pods -l k8s-app=kube-dns -n kube-system
kubectl get pods --namespace kube-system --selector k8s-app=kube-dns
```
> output
@@ -37,16 +37,16 @@ coredns-8494f9c688-zqrj2 1/1 Running 0 10s
## Verification
Create a `busybox` deployment:
Create a `busybox` pod:
```
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
kubectl run busybox --image busybox:1.36.1 --command -- sleep infinity
```
List the pod created by the `busybox` deployment:
List the pod created:
```
kubectl get pods -l run=busybox
kubectl get pods --selector run=busybox
```
> output
@@ -59,13 +59,14 @@ busybox 1/1 Running 0 3s
Retrieve the full name of the `busybox` pod:
```
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
POD_NAME=$(kubectl get pods --selector run=busybox \
--output jsonpath="{.items[0].metadata.name}")
```
Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
```
kubectl exec -ti $POD_NAME -- nslookup kubernetes
kubectl exec --stdin --tty "${POD_NAME}" -- nslookup kubernetes
```
> output
@@ -78,4 +79,10 @@ Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
```
Next: [Smoke Test](13-smoke-test.md)
Delete the `busybox` pod:
```
kubectl delete pod "${POD_NAME}"
```
Next: [Smoke Test](./13-smoke-test.md)

View File

@@ -10,19 +10,20 @@ Create a generic secret:
```
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
--from-literal 'mykey=mydata'
```
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
```
gcloud compute ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem\
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
--command 'sudo ETCDCTL_API=3 etcdctl get \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--endpoints=https://127.0.0.1:2379 \
--key=/etc/etcd/kubernetes-key.pem\
/registry/secrets/default/kubernetes-the-hard-way \
| hexdump -C'
```
> output
@@ -59,16 +60,16 @@ The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates t
In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
Create a deployment for the [nginx](https://nginx.org/en/) web server:
Create a deployment for the [nginx](https://nginx.org/) web server:
```
kubectl create deployment nginx --image=nginx
kubectl create deployment nginx --image nginx
```
List the pod created by the `nginx` deployment:
```
kubectl get pods -l app=nginx
kubectl get pods --selector app=nginx
```
> output
@@ -85,13 +86,13 @@ In this section you will verify the ability to access applications remotely usin
Retrieve the full name of the `nginx` pod:
```
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
POD_NAME="$(kubectl get pods --selector app=nginx --output jsonpath='{.items[0].metadata.name}')"
```
Forward port `8080` on your local machine to port `80` of the `nginx` pod:
```
kubectl port-forward $POD_NAME 8080:80
kubectl port-forward "${POD_NAME}" 8080:80
```
> output
@@ -111,13 +112,13 @@ curl --head http://127.0.0.1:8080
```
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:29:25 GMT
Server: nginx/1.25.1
Date: Mon, 31 Jul 2023 11:17:53 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6075b537-264"
ETag: "6488865a-267"
Accept-Ranges: bytes
```
@@ -137,30 +138,30 @@ In this section you will verify the ability to [retrieve container logs](https:/
Print the `nginx` pod logs:
```
kubectl logs $POD_NAME
kubectl logs "${POD_NAME}"
```
> output
```
...
127.0.0.1 - - [02/May/2021:05:29:25 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"
127.0.0.1 - - [31/Jul/2023:11:15:02 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/8.0.1" "-"
```
### Exec
In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).
In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug/debug-application/get-shell-running-container/#running-individual-commands-in-a-container).
Print the nginx version by executing the `nginx -v` command in the `nginx` container:
```
kubectl exec -ti $POD_NAME -- nginx -v
kubectl exec --stdin --tty "${POD_NAME}" -- nginx -v
```
> output
```
nginx version: nginx/1.19.10
nginx version: nginx/1.25.1
```
## Services
@@ -173,48 +174,48 @@ Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/conc
kubectl expose deployment nginx --port 80 --type NodePort
```
> The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
> The LoadBalancer service type cannot be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/concepts/architecture/cloud-controller/). Setting up cloud provider integration is out of scope for this tutorial.
Retrieve the node port assigned to the `nginx` service:
```
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
NODE_PORT="$(kubectl get svc nginx \
--output jsonpath='{range .spec.ports[0]}{.nodePort}')"
```
Create a firewall rule that allows remote access to the `nginx` node port:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--allow "tcp:${NODE_PORT}" \
--network kubernetes-the-hard-way
```
Retrieve the external IP address of a worker instance:
```
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
EXTERNAL_IP="$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')"
```
Make an HTTP request using the external IP address and the `nginx` node port:
```
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
curl --head "http://${EXTERNAL_IP}:${NODE_PORT}"
```
> output
```
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:31:52 GMT
Server: nginx/1.25.1
Date: Mon, 31 Jul 2023 11:24:03 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6075b537-264"
ETag: "6488865a-267"
Accept-Ranges: bytes
```
Next: [Cleaning Up](14-cleanup.md)
Next: [Cleaning Up](./14-cleanup.md)

View File

@@ -1,16 +1,16 @@
# Cleaning Up
In this lab you will delete the compute resources created during this tutorial.
In this lab you will delete the compute resources and optionally the files and configurations created during this tutorial.
## Compute Instances
Delete the controller and worker compute instances:
```
gcloud -q compute instances delete \
gcloud compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2 \
--zone $(gcloud config get-value compute/zone)
--quiet
```
## Networking
@@ -18,46 +18,56 @@ gcloud -q compute instances delete \
Delete the external load balancer network resources:
```
{
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
gcloud compute forwarding-rules delete kubernetes-forwarding-rule --quiet
gcloud -q compute target-pools delete kubernetes-target-pool
gcloud compute target-pools delete kubernetes-target-pool --quiet
gcloud -q compute http-health-checks delete kubernetes
gcloud compute http-health-checks delete kubernetes --quiet
gcloud -q compute addresses delete kubernetes-the-hard-way
}
gcloud compute addresses delete kubernetes-the-hard-way --quiet
```
Delete the `kubernetes-the-hard-way` firewall rules:
```
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
gcloud compute firewall-rules delete \
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-check
kubernetes-the-hard-way-allow-health-check \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-nginx-service \
--quiet
```
Delete the `kubernetes-the-hard-way` network VPC:
```
{
gcloud -q compute routes delete \
gcloud compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
kubernetes-route-10-200-2-0-24 \
--quiet
gcloud -q compute networks subnets delete kubernetes
gcloud compute networks subnets delete kubernetes --quiet
gcloud -q compute networks delete kubernetes-the-hard-way
}
gcloud compute networks delete kubernetes-the-hard-way --quiet
```
Delete the `kubernetes-the-hard-way` compute address:
## Clean Up the Admin Kubernetes Configuration File
```
gcloud -q compute addresses delete kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
kubectl config unset current-context
kubectl config delete-context kubernetes-the-hard-way
kubectl config delete-user admin
kubectl config delete-cluster kubernetes-the-hard-way
```
## Clean Up the Client Tools
```
sudo rm -i /usr/local/bin/cfssl \
/usr/local/bin/cfssljson \
/usr/local/bin/kubectl
```