UPDATED docs/* to be more compliant with markdown linting.

pull/507/head
David J Eddy 2019-11-15 12:12:32 -05:00
parent 5c462220b7
commit 0b24e6bb0e
15 changed files with 335 additions and 449 deletions

.gitignore

@ -1,50 +1,7 @@
admin-csr.json
admin-key.pem
admin.csr
admin.pem
admin.kubeconfig
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
encryption-config.yaml
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.csr
kube-controller-manager.kubeconfig
kube-controller-manager.pem
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.csr
kube-scheduler.kubeconfig
kube-scheduler.pem
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.csr
kube-proxy.kubeconfig
kube-proxy.pem
kubernetes-csr.json
kubernetes-key.pem
kubernetes.csr
kubernetes.pem
worker-0-csr.json
worker-0-key.pem
worker-0.csr
worker-0.kubeconfig
worker-0.pem
worker-1-csr.json
worker-1-key.pem
worker-1.csr
worker-1.kubeconfig
worker-1.pem
worker-2-csr.json
worker-2-key.pem
worker-2.csr
worker-2.kubeconfig
worker-2.pem
service-account-key.pem
service-account.csr
service-account.pem
service-account-csr.json
*.swp
*.csr
*.pem
*.json
*.kubeconfig
encryption-config.yaml


@ -16,7 +16,7 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in
Verify the Google Cloud SDK version is 262.0.0 or higher:
```
```sh
gcloud version
```
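If the reported version is older than 262.0.0 the SDK can usually be upgraded in place; `gcloud components update` applies to SDKs installed with the official installer (SDKs installed through a system package manager are updated through that package manager instead):
```sh
gcloud components update
```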
@ -26,25 +26,25 @@ This tutorial assumes a default compute region and zone have been configured.
If you are using the `gcloud` command-line tool for the first time, `init` is the easiest way to do this:
```
```sh
gcloud init
```
Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials:
```
```sh
gcloud auth login
```
Next set a default compute region and compute zone:
```
```sh
gcloud config set compute/region us-west1
```
Set a default compute zone:
```
```sh
gcloud config set compute/zone us-west1-c
```
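To confirm both defaults took effect, read them back; `gcloud config get-value` is the same subcommand used by later labs to look up the region:
```sh
gcloud config get-value compute/region
gcloud config get-value compute/zone
```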


@ -2,7 +2,6 @@
In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
## Install CFSSL
The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
@ -11,61 +10,62 @@ Download and install `cfssl` and `cfssljson`:
### OS X
```
```sh
curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssl
curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssljson
```
```
```sh
chmod +x cfssl cfssljson
```
```
```sh
sudo mv cfssl cfssljson /usr/local/bin/
```
Some OS X users may experience problems using the pre-built binaries, in which case [Homebrew](https://brew.sh) might be a better option:
```
```sh
brew install cfssl
```
### Linux
```
```sh
wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson
```
```
```sh
chmod +x cfssl cfssljson
```
```
```sh
sudo mv cfssl cfssljson /usr/local/bin/
```
### Verification
### cfssl Verification
Verify that `cfssl` and `cfssljson` version 1.3.4 or higher are installed:
```
```sh
cfssl version
```
> output
```
```sh
Version: 1.3.4
Revision: dev
Runtime: go1.13
```
```
```sh
cfssljson --version
```
```
```sh
Version: 1.3.4
Revision: dev
Runtime: go1.13
@ -75,45 +75,45 @@ Runtime: go1.13
The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
### OS X
### kubectl on OS X
```
```sh
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl
```
```
```sh
chmod +x kubectl
```
```
```sh
sudo mv kubectl /usr/local/bin/
```
### Linux
### kubectl on Linux
```
```sh
wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl
```
```
```sh
chmod +x kubectl
```
```
```sh
sudo mv kubectl /usr/local/bin/
```
### Verification
### kubectl Verification
Verify `kubectl` version 1.15.3 or higher is installed:
```
```sh
kubectl version --client
```
> output
```
```sh
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
```


@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com
Create the `kubernetes-the-hard-way` custom VPC network:
```
```sh
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
```
@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets)
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
```
```sh
gcloud compute networks subnets create kubernetes \
--network kubernetes-the-hard-way \
--range 10.240.0.0/24
@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \
Create a firewall rule that allows internal communication across all protocols:
```
```sh
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
@ -45,7 +45,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
```
```sh
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes-the-hard-way \
@ -56,13 +56,13 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
```sh
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```
> output
```
```sh
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp
@ -72,20 +72,20 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
```
```sh
gcloud compute addresses create kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
```
Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
```
```sh
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
```
> output
```
```sh
NAME REGION ADDRESS STATUS
kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED
```
@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http
Create three compute instances which will host the Kubernetes control plane:
```
```sh
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
--async \
@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
Create three compute instances which will host the Kubernetes worker nodes:
```
```sh
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
--async \
@ -143,13 +143,13 @@ done
List the compute instances in your default compute zone:
```
```sh
gcloud compute instances list
```
> output
```
```sh
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING
controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING
@ -165,13 +165,13 @@ SSH will be used to configure the controller and worker instances. When connecti
Test SSH access to the `controller-0` compute instance:
```
```sh
gcloud compute ssh controller-0
```
If this is your first time connecting to a compute instance, SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
```
```sh
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
@ -183,7 +183,7 @@ Enter same passphrase again:
At this point the generated SSH keys will be uploaded and stored in your project:
```
```sh
Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
The key fingerprint is:
@ -207,21 +207,21 @@ Waiting for SSH key to propagate.
After the SSH keys have been updated you'll be logged into the `controller-0` instance:
```
```sh
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1042-gcp x86_64)
...
Last login: Sun Sept 14 14:34:27 2019 from XX.XXX.XXX.XX
```
Type `exit` at the prompt to exit the `controller-0` compute instance:
```
```sh
$USER@controller-0:~$ exit
```
> output
```
```sh
logout
Connection to XX.XXX.XXX.XXX closed
```


@ -8,9 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g
Generate the CA configuration file, certificate, and private key:
```
{
```sh
cat > ca-config.json <<EOF
{
"signing": {
@ -47,13 +45,11 @@ cat > ca-csr.json <<EOF
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
```
Results:
```
```sh
ca-key.pem
ca.pem
```
@ -66,9 +62,7 @@ In this section you will generate client and server certificates for each Kubern
Generate the `admin` client certificate and private key:
```
{
```sh
cat > admin-csr.json <<EOF
{
"CN": "admin",
@ -94,13 +88,11 @@ cfssl gencert \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
```
Results:
```
```sh
admin-key.pem
admin.pem
```
@ -111,7 +103,7 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
Generate a certificate and private key for each Kubernetes worker node:
```
```sh
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
@ -150,7 +142,7 @@ done
Results:
```
```sh
worker-0-key.pem
worker-0.pem
worker-1-key.pem
@ -163,9 +155,7 @@ worker-2.pem
Generate the `kube-controller-manager` client certificate and private key:
```
{
```sh
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
@ -191,25 +181,20 @@ cfssl gencert \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
```
Results:
```
```sh
kube-controller-manager-key.pem
kube-controller-manager.pem
```
### The Kube Proxy Client Certificate
Generate the `kube-proxy` client certificate and private key:
```
{
```sh
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
@ -235,13 +220,11 @@ cfssl gencert \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
}
```
Results:
```
```sh
kube-proxy-key.pem
kube-proxy.pem
```
@ -250,9 +233,7 @@ kube-proxy.pem
Generate the `kube-scheduler` client certificate and private key:
```
{
```sh
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
@ -278,27 +259,22 @@ cfssl gencert \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
```
Results:
```
```sh
kube-scheduler-key.pem
kube-scheduler.pem
```
### The Kubernetes API Server Certificate
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Generate the Kubernetes API Server certificate and private key:
```
{
```sh
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
@ -331,15 +307,13 @@ cfssl gencert \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
```
> The Kubernetes API server is automatically assigned the `kubernetes` internal DNS name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
Results:
```
```sh
kubernetes-key.pem
kubernetes.pem
```
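As an optional sanity check (not part of the original lab), OpenSSL can print the subject alternative names baked into the API server certificate, which should include the static IP address and the hostnames passed via the `-hostname` flag above:
```sh
openssl x509 -in kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'
```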
@ -350,9 +324,7 @@ The Kubernetes Controller Manager leverages a key pair to generate and sign serv
Generate the `service-account` certificate and private key:
```
{
```sh
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
@ -378,13 +350,11 @@ cfssl gencert \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
```
Results:
```
```sh
service-account-key.pem
service-account.pem
```
@ -394,7 +364,7 @@ service-account.pem
Copy the appropriate certificates and private keys to each worker instance:
```
```sh
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
@ -402,7 +372,7 @@ done
Copy the appropriate certificates and private keys to each controller instance:
```
```sh
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/


@ -12,7 +12,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Retrieve the `kubernetes-the-hard-way` static IP address:
```
```sh
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
@ -26,7 +26,7 @@ When generating kubeconfig files for Kubelets the client certificate matching th
Generate a kubeconfig file for each worker node:
```
```sh
for instance in worker-0 worker-1 worker-2; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
@ -51,7 +51,7 @@ done
Results:
```
```sh
worker-0.kubeconfig
worker-1.kubeconfig
worker-2.kubeconfig
@ -61,32 +61,30 @@ worker-2.kubeconfig
Generate a kubeconfig file for the `kube-proxy` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
```sh
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
Results:
```
```sh
kube-proxy.kubeconfig
```
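To spot-check a generated kubeconfig (optional), `kubectl config view` prints its clusters, users, and contexts; the embedded certificate data is redacted in the output:
```sh
kubectl config view --kubeconfig=kube-proxy.kubeconfig --minify
```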
@ -94,32 +92,30 @@ kube-proxy.kubeconfig
Generate a kubeconfig file for the `kube-controller-manager` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
```sh
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
```
Results:
```
```sh
kube-controller-manager.kubeconfig
```
@ -128,32 +124,30 @@ kube-controller-manager.kubeconfig
Generate a kubeconfig file for the `kube-scheduler` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
```sh
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
```
Results:
```
```sh
kube-scheduler.kubeconfig
```
@ -161,7 +155,7 @@ kube-scheduler.kubeconfig
Generate a kubeconfig file for the `admin` user:
```
```sh
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
@ -186,18 +180,15 @@ Generate a kubeconfig file for the `admin` user:
Results:
```
```sh
admin.kubeconfig
```
##
## Distribute the Kubernetes Configuration Files
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
```
```sh
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
@ -205,7 +196,7 @@ done
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
```
```sh
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done


@ -8,7 +8,7 @@ In this lab you will generate an encryption key and an [encryption config](https
Generate an encryption key:
```
```sh
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```
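The `aescbc` provider expects a 32-byte key. As a quick sanity check, decode the base64 value and count the bytes; the result should be `32` (Linux syntax shown, macOS uses `base64 -D`):
```sh
echo -n ${ENCRYPTION_KEY} | base64 -d | wc -c
```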
@ -16,7 +16,7 @@ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
Create the `encryption-config.yaml` encryption config file:
```
```sh
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
@ -34,7 +34,7 @@ EOF
Copy the `encryption-config.yaml` encryption config file to each controller instance:
```
```sh
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done


@ -6,7 +6,7 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. For example:
```
```sh
gcloud compute ssh controller-0
```
@ -20,45 +20,41 @@ gcloud compute ssh controller-0
Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:
```
```sh
wget -q --show-progress --https-only --timestamping \
"https://github.com/etcd-io/etcd/releases/download/v3.4.0/etcd-v3.4.0-linux-amd64.tar.gz"
```
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
{
tar -xvf etcd-v3.4.0-linux-amd64.tar.gz
sudo mv etcd-v3.4.0-linux-amd64/etcd* /usr/local/bin/
}
```sh
tar -xvf etcd-v3.4.0-linux-amd64.tar.gz
sudo mv etcd-v3.4.0-linux-amd64/etcd* /usr/local/bin/
```
### Configure the etcd Server
```
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
```sh
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
```
```sh
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
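A quick `echo` confirms the metadata lookup worked; on the controllers this should print one of the `10.240.0.10`-`10.240.0.12` addresses assigned earlier:
```sh
echo ${INTERNAL_IP}
```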
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
```
```sh
ETCD_NAME=$(hostname -s)
```
Create the `etcd.service` systemd unit file:
```
```sh
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
@ -94,12 +90,10 @@ EOF
### Start the etcd Server
```
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
```sh
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
@ -108,7 +102,7 @@ EOF
List the etcd cluster members:
```
```sh
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
@ -118,7 +112,7 @@ sudo ETCDCTL_API=3 etcdctl member list \
> output
```
```sh
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
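```
As an additional optional check, `etcdctl` can report per-endpoint health using the same TLS flags:
```sh
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```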


@ -6,7 +6,7 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. For example:
```
```sh
gcloud compute ssh controller-0
```
@ -18,7 +18,7 @@ gcloud compute ssh controller-0
Create the Kubernetes configuration directory:
```
```sh
sudo mkdir -p /etc/kubernetes/config
```
@ -26,7 +26,7 @@ sudo mkdir -p /etc/kubernetes/config
Download the official Kubernetes release binaries:
```
```sh
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \
@ -36,35 +36,31 @@ wget -q --show-progress --https-only --timestamping \
Install the Kubernetes binaries:
```
{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
```sh
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
```
### Configure the Kubernetes API Server
```
{
sudo mkdir -p /var/lib/kubernetes/
```sh
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
```
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
```
```sh
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
Create the `kube-apiserver.service` systemd unit file:
```
```sh
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
@ -112,13 +108,13 @@ EOF
Move the `kube-controller-manager` kubeconfig into place:
```
```sh
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```
Create the `kube-controller-manager.service` systemd unit file:
```
```sh
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
@ -150,13 +146,13 @@ EOF
Move the `kube-scheduler` kubeconfig into place:
```
```sh
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
Create the `kube-scheduler.yaml` configuration file:
```
```sh
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
@ -169,7 +165,7 @@ EOF
Create the `kube-scheduler.service` systemd unit file:
```
```sh
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
@ -189,12 +185,10 @@ EOF
### Start the Controller Services
```
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
```sh
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
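If any of the three services fails to come up, `systemctl` and `journalctl` are the quickest way to confirm state and read logs (a generic troubleshooting aid, not a lab step; expect `active` printed three times):
```sh
sudo systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
sudo journalctl -u kube-apiserver --no-pager | tail -n 20
```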
@ -207,12 +201,12 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala
Install a basic web server to handle HTTP health checks:
```
```sh
sudo apt-get update
sudo apt-get install -y nginx
```
```
```sh
cat > kubernetes.default.svc.cluster.local <<EOF
server {
listen 80;
@ -224,32 +218,30 @@ server {
}
}
EOF
```
```sh
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
```
```
{
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}
```
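Before restarting nginx it is worth validating the new configuration; `nginx -t` parses the configuration files without serving traffic:
```sh
sudo nginx -t
```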
```
```sh
sudo systemctl restart nginx
```
```
```sh
sudo systemctl enable nginx
```
### Verification
### Component Verification
```
```sh
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```
```
```sh
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
@ -260,11 +252,11 @@ etcd-1 Healthy {"health": "true"}
Test the nginx HTTP health check proxy:
```
```sh
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```
```
```sh
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sat, 14 Sep 2019 18:34:11 GMT
@ -286,13 +278,13 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
```
```sh
gcloud compute ssh controller-0
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
```sh
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
@ -320,7 +312,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
```
```sh
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
@ -349,43 +341,41 @@ In this section you will provision an external load balancer to front the Kubern
Create the external load balancer network resources:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
```sh
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
gcloud compute http-health-checks create kubernetes \
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
gcloud compute target-pools create kubernetes-target-pool \
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools add-instances kubernetes-target-pool \
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
}
```
### Verification
### Instance Verification
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
```sh
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
@ -393,24 +383,22 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
Make an HTTP request for the Kubernetes version info:
```
```sh
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
> output
```
{
"major": "1",
"minor": "15",
"gitVersion": "v1.15.3",
"gitCommit": "2d3c76f9091b6bec110a5e63777c332469e0cba2",
"gitTreeState": "clean",
"buildDate": "2019-08-19T11:05:50Z",
"goVersion": "go1.12.9",
"compiler": "gc",
"platform": "linux/amd64"
}
```sh
{
"major": "1",
"minor": "15",
"gitVersion": "v1.15.3",
"gitCommit": "2d3c76f9091b6bec110a5e63777c332469e0cba2",
"gitTreeState": "clean",
"buildDate": "2019-08-19T11:05:50Z",
"goVersion": "go1.12.9",
"compiler": "gc",
"platform": "linux/amd64"
}
```
Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)


@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command. For example:
```
```sh
gcloud compute ssh worker-0
```
@ -18,11 +18,9 @@ gcloud compute ssh worker-0
Install the OS dependencies:
```
{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}
```sh
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
```
> The socat binary enables support for the `kubectl port-forward` command.
@ -33,13 +31,13 @@ By default the kubelet will fail to start if [swap](https://help.ubuntu.com/comm
Verify if swap is enabled:
```
```sh
sudo swapon --show
```
If the output is empty then swap is not enabled. If swap is enabled, run the following command to disable swap immediately:
```
```sh
sudo swapoff -a
```
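`swapoff -a` only disables swap until the next boot. One common way to make the change persistent on Ubuntu is to comment out the swap entry in `/etc/fstab` (a sketch; inspect the file first, as the exact entry varies by image):
```sh
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```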
@ -47,7 +45,7 @@ sudo swapoff -a
### Download and Install Worker Binaries
```
```sh
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
@ -60,7 +58,7 @@ wget -q --show-progress --https-only --timestamping \
Create the installation directories:
```
```sh
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
@ -72,31 +70,29 @@ sudo mkdir -p \
Install the worker binaries:
```
{
mkdir containerd
tar -xvf crictl-v1.15.0-linux-amd64.tar.gz
tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
}
```sh
mkdir containerd
tar -xvf crictl-v1.15.0-linux-amd64.tar.gz
tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
```
### Configure CNI Networking
Retrieve the Pod CIDR range for the current compute instance:
```
```sh
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```
Create the `bridge` network configuration file:
```
```sh
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.3.1",
@ -118,7 +114,7 @@ EOF
Create the `loopback` network configuration file:
```
```sh
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.3.1",
@ -132,11 +128,11 @@ EOF
Create the `containerd` configuration file:
```
```sh
sudo mkdir -p /etc/containerd/
```
```
```sh
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
@ -150,7 +146,7 @@ EOF
Create the `containerd.service` systemd unit file:
```
```sh
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
@ -176,17 +172,15 @@ EOF
### Configure the Kubelet
```
{
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
}
```sh
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
```
Create the `kubelet-config.yaml` configuration file:
```
```sh
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
@ -214,7 +208,7 @@ EOF
Create the `kubelet.service` systemd unit file:
```
```sh
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
@ -242,13 +236,13 @@ EOF
### Configure the Kubernetes Proxy
```
```sh
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy-config.yaml` configuration file:
```
```sh
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
@ -261,7 +255,7 @@ EOF
Create the `kube-proxy.service` systemd unit file:
```
```sh
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
@ -280,12 +274,10 @@ EOF
### Start the Worker Services
```
{
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
}
```sh
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
```
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
@ -296,14 +288,14 @@ EOF
List the registered Kubernetes nodes:
```
```sh
gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
```
> output
```
```sh
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 15s v1.15.3
worker-1 Ready <none> 15s v1.15.3


@ -10,40 +10,38 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
```sh
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
kubectl config set-cluster kubernetes-the-hard-way \
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
}
kubectl config use-context kubernetes-the-hard-way
```
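A quick check that the new context is active; this should print `kubernetes-the-hard-way`:
```sh
kubectl config current-context
```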
## Verification
Check the health of the remote Kubernetes cluster:
```
```sh
kubectl get componentstatuses
```
> output
```
```sh
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
@ -54,13 +52,13 @@ etcd-0 Healthy {"health":"true"}
List the nodes in the remote Kubernetes cluster:
```
```sh
kubectl get nodes
```
> output
```
```sh
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 2m9s v1.15.3
worker-1 Ready <none> 2m9s v1.15.3


@ -12,7 +12,7 @@ In this section you will gather the information required to create routes in the
Print the internal IP address and Pod CIDR range for each worker instance:
```
```sh
for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
@ -21,7 +21,7 @@ done
> output
```
```sh
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
@ -31,7 +31,7 @@ done
Create network routes for each worker instance:
```
```sh
for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
--network kubernetes-the-hard-way \
@ -42,13 +42,13 @@ done
List the routes in the `kubernetes-the-hard-way` VPC network:
```
```sh
gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```
> output
```
```sh
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-081879136902de56 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000
default-route-55199a5aa126d7aa kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000


@ -6,13 +6,13 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts
Deploy the `coredns` cluster add-on:
```
```sh
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
```
> output
```
```sh
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
@ -23,13 +23,13 @@ service/kube-dns created
List the pods created by the `kube-dns` deployment:
```
```sh
kubectl get pods -l k8s-app=kube-dns -n kube-system
```
> output
```
```sh
NAME READY STATUS RESTARTS AGE
coredns-699f8ddd77-94qv9 1/1 Running 0 20s
coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
@ -39,38 +39,38 @@ coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
Create a `busybox` deployment:
```
```sh
kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600
```
List the pod created by the `busybox` deployment:
```
```sh
kubectl get pods -l run=busybox
```
> output
```
```sh
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 3s
```
Retrieve the full name of the `busybox` pod:
```
```sh
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
```
Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
```
```sh
kubectl exec -ti $POD_NAME -- nslookup kubernetes
```
> output
```
```sh
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local


@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt
Create a generic secret:
```
```sh
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
```
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
```
```sh
gcloud compute ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
@ -27,7 +27,7 @@ gcloud compute ssh controller-0 \
> output
```
```sh
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
@ -53,19 +53,19 @@ In this section you will verify the ability to create and manage [Deployments](h
Create a deployment for the [nginx](https://nginx.org/en/) web server:
```
```sh
kubectl create deployment nginx --image=nginx
```
List the pod created by the `nginx` deployment:
```
```sh
kubectl get pods -l app=nginx
```
> output
```
```sh
NAME READY STATUS RESTARTS AGE
nginx-554b9c67f9-vt5rn 1/1 Running 0 10s
```
@ -76,32 +76,32 @@ In this section you will verify the ability to access applications remotely usin
Retrieve the full name of the `nginx` pod:
```
```sh
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
```
Forward port `8080` on your local machine to port `80` of the `nginx` pod:
```
```sh
kubectl port-forward $POD_NAME 8080:80
```
> output
```
```sh
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```
In a new terminal make an HTTP request using the forwarding address:
```
```sh
curl --head http://127.0.0.1:8080
```
> output
```
```sh
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:10:11 GMT
@ -115,7 +115,7 @@ Accept-Ranges: bytes
Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
```
```sh
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
@ -128,13 +128,13 @@ In this section you will verify the ability to [retrieve container logs](https:/
Print the `nginx` pod logs:
```
```sh
kubectl logs $POD_NAME
```
> output
```
```sh
127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-"
```
@ -144,13 +144,13 @@ In this section you will verify the ability to [execute commands in a container]
Print the nginx version by executing the `nginx -v` command in the `nginx` container:
```
```sh
kubectl exec -ti $POD_NAME -- nginx -v
```
> output
```
```sh
nginx version: nginx/1.17.3
```
@ -160,7 +160,7 @@ In this section you will verify the ability to expose applications using a [Serv
Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
```
```sh
kubectl expose deployment nginx --port 80 --type NodePort
```
@ -168,14 +168,14 @@ kubectl expose deployment nginx --port 80 --type NodePort
Retrieve the node port assigned to the `nginx` service:
```
```sh
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```
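Echo the variable to confirm the lookup succeeded; the value should fall in the default NodePort range (30000-32767):
```sh
echo ${NODE_PORT}
```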
Create a firewall rule that allows remote access to the `nginx` node port:
```
```sh
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
@ -183,20 +183,20 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service
Retrieve the external IP address of a worker instance:
```
```sh
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
```
Make an HTTP request using the external IP address and the `nginx` node port:
```
```sh
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```
> output
```
```sh
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:12:35 GMT


@ -6,7 +6,7 @@ In this lab you will delete the compute resources created during this tutorial.
Delete the controller and worker compute instances:
```
```sh
gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2 \
@ -17,22 +17,20 @@ gcloud -q compute instances delete \
Delete the external load balancer network resources:
```
{
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
```sh
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
gcloud -q compute target-pools delete kubernetes-target-pool
gcloud -q compute target-pools delete kubernetes-target-pool
gcloud -q compute http-health-checks delete kubernetes
gcloud -q compute http-health-checks delete kubernetes
gcloud -q compute addresses delete kubernetes-the-hard-way
}
gcloud -q compute addresses delete kubernetes-the-hard-way
```
Delete the `kubernetes-the-hard-way` firewall rules:
```
```sh
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
@ -42,15 +40,13 @@ gcloud -q compute firewall-rules delete \
Delete the `kubernetes-the-hard-way` VPC network:
```
{
gcloud -q compute routes delete \
```sh
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks delete kubernetes-the-hard-way
}
gcloud -q compute networks delete kubernetes-the-hard-way
```