diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 78bff75..2eb237c 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,3 +1,5 @@ +# Contributing + This project is made possible by contributors like YOU! While all contributions are welcome, please be sure to follow these suggestions to help your PR get merged. ## License @@ -15,4 +17,3 @@ Here are some examples of the review and justification process: ## Notes on minutiae If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time. - diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md index 01c4d13..e206cb1 100644 --- a/docs/01-prerequisites.md +++ b/docs/01-prerequisites.md @@ -16,7 +16,7 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in Verify the Google Cloud SDK version is 262.0.0 or higher: -``` +```bash gcloud version ``` @@ -26,25 +26,25 @@ This tutorial assumes a default compute region and zone have been configured.
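As an aside (not part of the original guide): GCP zone names are simply the region name plus a letter suffix, which is why the tutorial's `us-west1-c` zone lives inside the `us-west1` region. A minimal shell sketch of that relationship:

```bash
# Not from the guide: illustrating how GCP zone names relate to region names.
# A zone is its region's name plus a "-<letter>" suffix.
REGION="us-west1"
ZONE="${REGION}-c"

# Recover the region by stripping the zone's trailing "-<letter>".
DERIVED_REGION="${ZONE%-*}"

echo "zone ${ZONE} is in region ${DERIVED_REGION}"
```

Keeping the configured region and zone consistent in this way matters later, when regional resources (the static IP) and zonal resources (the instances) must land in the same region.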
If you are using the `gcloud` command-line tool for the first time, `init` is the easiest way to do this: -``` +```bash gcloud init ``` Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials: -``` +```bash gcloud auth login ``` Next set a default compute region: -``` +```bash gcloud config set compute/region us-west1 ``` Set a default compute zone: -``` +```bash gcloud config set compute/zone us-west1-c ``` diff --git a/docs/02-client-tools.md b/docs/02-client-tools.md index 2252c96..34141b1 100644 --- a/docs/02-client-tools.md +++ b/docs/02-client-tools.md @@ -2,7 +2,6 @@ In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl). - ## Install CFSSL The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
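For context, every certificate step in this guide starts from a small JSON CSR config that `cfssl` consumes. The sketch below shows the general shape of such a file — the `CN` and `names` values are placeholders for illustration, not files from this tutorial — and sanity-checks the JSON before it would be handed to `cfssl gencert`:

```bash
# Illustrative only: the general shape of a cfssl CSR config.
# The CN and "names" values are placeholders, not this tutorial's files.
cat > example-csr.json <<EOF
{
  "CN": "example",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "ST": "Oregon"
    }
  ]
}
EOF

# Verify the file is well-formed JSON before feeding it to cfssl.
python3 -m json.tool example-csr.json > /dev/null && echo "valid JSON"
```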
@@ -11,38 +10,38 @@ Download and install `cfssl` and `cfssljson`: ### OS X -``` +```bash curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssl curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssljson ``` -``` +```bash chmod +x cfssl cfssljson ``` -``` +```bash sudo mv cfssl cfssljson /usr/local/bin/ ``` Some OS X users may experience problems using the pre-built binaries, in which case [Homebrew](https://brew.sh) might be a better option: -``` +```bash brew install cfssl ``` ### Linux -``` +```bash wget -q --show-progress --https-only --timestamping \ https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \ https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson ``` -``` +```bash chmod +x cfssl cfssljson ``` -``` +```bash sudo mv cfssl cfssljson /usr/local/bin/ ``` @@ -50,22 +49,23 @@ sudo mv cfssl cfssljson /usr/local/bin/ Verify that `cfssl` and `cfssljson` version 1.3.4 or higher is installed: -``` +```bash cfssl version ``` > output -``` +```bash Version: 1.3.4 Revision: dev Runtime: go1.13 ``` -``` +```bash cfssljson --version ``` -``` + +```bash Version: 1.3.4 Revision: dev Runtime: go1.13 @@ -75,45 +75,45 @@ Runtime: go1.13 The `kubectl` command line utility is used to interact with the Kubernetes API Server.
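The official release URLs used for `kubectl` below all follow one predictable pattern, which makes it easy to swap in a different version or platform. A sketch, using the version, OS, and architecture this tutorial pins:

```bash
# The kubectl download URLs below follow one pattern:
#   .../release/<version>/bin/<os>/<arch>/kubectl
# These are the values this tutorial uses.
KUBE_VERSION="v1.15.3"
OS="linux"    # "darwin" for the OS X instructions
ARCH="amd64"

URL="https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/${OS}/${ARCH}/kubectl"
echo "${URL}"
```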
Download and install `kubectl` from the official release binaries: -### OS X +### Install on OS X -``` +```bash curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl ``` -``` +```bash chmod +x kubectl ``` -``` +```bash sudo mv kubectl /usr/local/bin/ ``` -### Linux +### Install on Linux -``` +```bash wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl ``` -``` +```bash chmod +x kubectl ``` -``` +```bash sudo mv kubectl /usr/local/bin/ ``` -### Verification +### Verify the Installation Verify `kubectl` version 1.15.3 or higher is installed: -``` +```bash kubectl version --client ``` > output -``` +```bash Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} ``` diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md index a30c520..cab9b0a 100644 --- a/docs/03-compute-resources.md +++ b/docs/03-compute-resources.md @@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com Create the `kubernetes-the-hard-way` custom VPC network: -``` +```bash gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom ``` @@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: -``` +```bash gcloud compute networks subnets create kubernetes \ --network kubernetes-the-hard-way \ --range 10.240.0.0/24 @@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \ Create a firewall rule that allows internal communication across all protocols: -``` +```bash gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --allow tcp,udp,icmp \ --network kubernetes-the-hard-way \ @@ -45,7 +45,7 @@
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ Create a firewall rule that allows external SSH, ICMP, and HTTPS: -``` +```bash gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --allow tcp:22,tcp:6443,icmp \ --network kubernetes-the-hard-way \ @@ -56,13 +56,13 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ List the firewall rules in the `kubernetes-the-hard-way` VPC network: -``` +```bash gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way" ``` > output -``` +```bash NAME NETWORK DIRECTION PRIORITY ALLOW DENY kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp @@ -72,20 +72,20 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: -``` +```bash gcloud compute addresses create kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) ``` Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region: -``` +```bash gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')" ``` > output -``` +```bash NAME REGION ADDRESS STATUS kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED ``` @@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http Create three compute instances which will host the Kubernetes control plane: -``` +```bash for i in 0 1 2; do gcloud compute instances create controller-${i} \ --async \ @@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste Create three compute instances which will host the Kubernetes worker nodes: -``` +```bash for i in 0 1 2; do gcloud compute instances create worker-${i} \ --async \ @@ 
-143,13 +143,13 @@ done List the compute instances in your default compute zone: -``` +```bash gcloud compute instances list ``` > output -``` +```bash NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING @@ -165,13 +165,13 @@ SSH will be used to configure the controller and worker instances. When connecti Test SSH access to the `controller-0` compute instance: -``` +```bash gcloud compute ssh controller-0 ``` If this is your first time connecting to a compute instance, SSH keys will be generated for you. Enter a passphrase at the prompt to continue: -``` +```bash WARNING: The public SSH key file for gcloud does not exist. WARNING: The private SSH key file for gcloud does not exist. WARNING: You do not have an SSH key for gcloud. @@ -183,7 +183,7 @@ Enter same passphrase again: At this point the generated SSH keys will be uploaded and stored in your project: -``` +```bash Your identification has been saved in /home/$USER/.ssh/google_compute_engine. Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub. The key fingerprint is: @@ -207,7 +207,7 @@ Waiting for SSH key to propagate. After the SSH keys have been updated you'll be logged into the `controller-0` instance: -``` +```bash Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1042-gcp x86_64) ...
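When scripting against the instance list shown above, `gcloud`'s `--format 'value(...)'` flag is usually the cleaner option, but a quick `awk` pass over the tabular output also works. A sketch over hardcoded sample text (the names and internal IPs mirror the table above, so the snippet runs without GCP access):

```bash
# Sketch: pulling the NAME and INTERNAL_IP columns out of
# `gcloud compute instances list`-style output with awk.
# The sample text is hardcoded for illustration.
LIST="controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING
controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING"

echo "$LIST" | awk '{ print $1, $4 }'
```

Note the column positions only hold when the PREEMPTIBLE column is empty, as in this tutorial; `--format` avoids that fragility on real output.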
@@ -216,12 +216,13 @@ Last login: Sun Sept 14 14:34:27 2019 from XX.XXX.XXX.XX Type `exit` at the prompt to exit the `controller-0` compute instance: -``` +```bash $USER@controller-0:~$ exit ``` + > output -``` +```bash logout Connection to XX.XXX.XXX.XXX closed ``` diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index 1510993..8ce7ec7 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -8,7 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g Generate the CA configuration file, certificate, and private key: -``` +```bash { cat > ca-config.json < admin-csr.json < ${instance}-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < service-account-csr.json < encryption-config.yaml < output -``` +```bash 3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379 f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379 ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379 diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md index 3d0cbca..863a49d 100644 --- a/docs/08-bootstrapping-kubernetes-controllers.md +++ b/docs/08-bootstrapping-kubernetes-controllers.md @@ -6,7 +6,7 @@ In this lab you will bootstrap the Kubernetes control plane across three compute The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. 
Example: -``` +```bash gcloud compute ssh controller-0 ``` @@ -18,7 +18,7 @@ gcloud compute ssh controller-0 Create the Kubernetes configuration directory: -``` +```bash sudo mkdir -p /etc/kubernetes/config ``` @@ -26,7 +26,7 @@ sudo mkdir -p /etc/kubernetes/config Download the official Kubernetes release binaries: -``` +```bash wget -q --show-progress --https-only --timestamping \ "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \ "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \ @@ -36,7 +36,7 @@ wget -q --show-progress --https-only --timestamping \ Install the Kubernetes binaries: -``` +```bash { chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ @@ -45,7 +45,7 @@ Install the Kubernetes binaries: ### Configure the Kubernetes API Server -``` +```bash { sudo mkdir -p /var/lib/kubernetes/ @@ -57,14 +57,14 @@ Install the Kubernetes binaries: The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance: -``` +```bash INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) ``` Create the `kube-apiserver.service` systemd unit file: -``` +```bash cat < kubernetes.default.svc.cluster.local < The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**. 
- ### Provision a Network Load Balancer Create the external load balancer network resources: -``` +```bash { KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ @@ -379,13 +378,13 @@ Create the external load balancer network resources: } ``` -### Verification +### Load Balancer Verification > The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**. Retrieve the `kubernetes-the-hard-way` static IP address: -``` +```bash KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') @@ -393,13 +392,13 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har Make an HTTP request for the Kubernetes version info: -``` +```bash curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version ``` > output -``` +```bash { "major": "1", "minor": "15", diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md index 6dd752d..f4b0c1b 100644 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ b/docs/09-bootstrapping-kubernetes-workers.md @@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command.
Example: -``` +```bash gcloud compute ssh worker-0 ``` @@ -18,7 +18,7 @@ gcloud compute ssh worker-0 Install the OS dependencies: -``` +```bash { sudo apt-get update sudo apt-get -y install socat conntrack ipset @@ -33,13 +33,13 @@ By default the kubelet will fail to start if [swap](https://help.ubuntu.com/comm Verify if swap is enabled: -``` +```bash sudo swapon --show ``` If the output is empty then swap is not enabled. If swap is enabled, run the following command to disable swap immediately: -``` +```bash sudo swapoff -a ``` @@ -47,7 +47,7 @@ sudo swapoff -a ### Download and Install Worker Binaries -``` +```bash wget -q --show-progress --https-only --timestamping \ https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \ https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \ @@ -60,7 +60,7 @@ wget -q --show-progress --https-only --timestamping \ Create the installation directories: -``` +```bash sudo mkdir -p \ /etc/cni/net.d \ /opt/cni/bin \ @@ -72,7 +72,7 @@ sudo mkdir -p \ Install the worker binaries: -``` +```bash { mkdir containerd tar -xvf crictl-v1.15.0-linux-amd64.tar.gz @@ -89,14 +89,14 @@ Install the worker binaries: Retrieve the Pod CIDR range for the current compute instance: -``` +```bash POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) ``` Create the `bridge` network configuration file: -``` +```bash cat < The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`. +> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
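The `bridge` configuration step above substitutes the node's `POD_CIDR` into a JSON file through the shell heredoc. A trimmed, runnable sketch of that substitution — the JSON here is a simplified stand-in, not the tutorial's full CNI config, and `POD_CIDR` is hardcoded where a real worker would read it from instance metadata:

```bash
# Sketch of the heredoc substitution used above. POD_CIDR is hardcoded;
# on a real worker it comes from the metadata query shown earlier.
# This JSON is a simplified stand-in, not the tutorial's full bridge config.
POD_CIDR="10.200.0.0/24"

cat > 10-bridge-example.conf <<EOF
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "ranges": [[{"subnet": "${POD_CIDR}"}]]
  }
}
EOF

grep '"subnet"' 10-bridge-example.conf
```

Because the heredoc delimiter is unquoted, the shell expands `${POD_CIDR}` before the file is written, which is exactly how each worker ends up with its own per-node subnet in the config.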
Create the `kubelet.service` systemd unit file: -``` +```bash cat < output -``` +```bash NAME STATUS ROLES AGE VERSION worker-0 Ready 15s v1.15.3 worker-1 Ready 15s v1.15.3 diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md index c64a434..84177f3 100644 --- a/docs/10-configuring-kubectl.md +++ b/docs/10-configuring-kubectl.md @@ -10,7 +10,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high Generate a kubeconfig file suitable for authenticating as the `admin` user: -``` +```bash { KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ @@ -37,13 +37,13 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user: Check the health of the remote Kubernetes cluster: -``` +```bash kubectl get componentstatuses ``` > output -``` +```bash NAME STATUS MESSAGE ERROR controller-manager Healthy ok scheduler Healthy ok @@ -54,13 +54,13 @@ etcd-0 Healthy {"health":"true"} List the nodes in the remote Kubernetes cluster: -``` +```bash kubectl get nodes ``` > output -``` +```bash NAME STATUS ROLES AGE VERSION worker-0 Ready 2m9s v1.15.3 worker-1 Ready 2m9s v1.15.3 diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md index c9f0b6a..0e7ccd4 100644 --- a/docs/11-pod-network-routes.md +++ b/docs/11-pod-network-routes.md @@ -12,7 +12,7 @@ In this section you will gather the information required to create routes in the Print the internal IP address and Pod CIDR range for each worker instance: -``` +```bash for instance in worker-0 worker-1 worker-2; do gcloud compute instances describe ${instance} \ --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' @@ -21,7 +21,7 @@ done > output -``` +```bash 10.240.0.20 10.200.0.0/24 10.240.0.21 10.200.1.0/24 10.240.0.22 10.200.2.0/24 @@ -31,7 +31,7 @@ done Create network routes for each worker instance: -``` +```bash for i in 0 1 2; 
do gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ --network kubernetes-the-hard-way \ @@ -42,13 +42,13 @@ done List the routes in the `kubernetes-the-hard-way` VPC network: -``` +```bash gcloud compute routes list --filter "network: kubernetes-the-hard-way" ``` > output -``` +```bash NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY default-route-081879136902de56 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000 default-route-55199a5aa126d7aa kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 diff --git a/docs/12-dns-addon.md b/docs/12-dns-addon.md index f7a5d43..d24dc84 100644 --- a/docs/12-dns-addon.md +++ b/docs/12-dns-addon.md @@ -6,13 +6,13 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts Deploy the `coredns` cluster add-on: -``` +```bash kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml ``` > output -``` +```bash serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created @@ -23,13 +23,13 @@ service/kube-dns created List the pods created by the `kube-dns` deployment: -``` +```bash kubectl get pods -l k8s-app=kube-dns -n kube-system ``` > output -``` +```bash NAME READY STATUS RESTARTS AGE coredns-699f8ddd77-94qv9 1/1 Running 0 20s coredns-699f8ddd77-gtcgb 1/1 Running 0 20s @@ -39,38 +39,38 @@ coredns-699f8ddd77-gtcgb 1/1 Running 0 20s Create a `busybox` deployment: -``` +```bash kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600 ``` List the pod created by the `busybox` deployment: -``` +```bash kubectl get pods -l run=busybox ``` > output -``` +```bash NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 3s ``` Retrieve the full name of the `busybox` pod: -``` +```bash POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") ``` Execute a DNS lookup for the `kubernetes` service inside the 
`busybox` pod: -``` +```bash kubectl exec -ti $POD_NAME -- nslookup kubernetes ``` > output -``` +```bash Server: 10.32.0.10 Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md index ed90844..5b65e73 100644 --- a/docs/13-smoke-test.md +++ b/docs/13-smoke-test.md @@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt Create a generic secret: -``` +```bash kubectl create secret generic kubernetes-the-hard-way \ --from-literal="mykey=mydata" ``` Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: -``` +```bash gcloud compute ssh controller-0 \ --command "sudo ETCDCTL_API=3 etcdctl get \ --endpoints=https://127.0.0.1:2379 \ @@ -27,7 +27,7 @@ gcloud compute ssh controller-0 \ > output -``` +```bash 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret| 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern| 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa| @@ -53,19 +53,19 @@ In this section you will verify the ability to create and manage [Deployments](h Create a deployment for the [nginx](https://nginx.org/en/) web server: -``` +```bash kubectl create deployment nginx --image=nginx ``` List the pod created by the `nginx` deployment: -``` +```bash kubectl get pods -l app=nginx ``` > output -``` +```bash NAME READY STATUS RESTARTS AGE nginx-554b9c67f9-vt5rn 1/1 Running 0 10s ``` @@ -76,32 +76,32 @@ In this section you will verify the ability to access applications remotely usin Retrieve the full name of the `nginx` pod: -``` +```bash POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}") ``` Forward port `8080` on your local machine to port `80` of the `nginx` pod: -``` +```bash kubectl port-forward $POD_NAME 8080:80 ``` > output -``` +```bash Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` In a new terminal make an HTTP 
request using the forwarding address: -``` +```bash curl --head http://127.0.0.1:8080 ``` > output -``` +```bash HTTP/1.1 200 OK Server: nginx/1.17.3 Date: Sat, 14 Sep 2019 21:10:11 GMT @@ -115,7 +115,7 @@ Accept-Ranges: bytes Switch back to the previous terminal and stop the port forwarding to the `nginx` pod: -``` +```bash Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 Handling connection for 8080 @@ -128,13 +128,13 @@ In this section you will verify the ability to [retrieve container logs](https:/ Print the `nginx` pod logs: -``` +```bash kubectl logs $POD_NAME ``` > output -``` +```bash 127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-" ``` @@ -144,13 +144,13 @@ In this section you will verify the ability to [execute commands in a container] Print the nginx version by executing the `nginx -v` command in the `nginx` container: -``` +```bash kubectl exec -ti $POD_NAME -- nginx -v ``` > output -``` +```bash nginx version: nginx/1.17.3 ``` @@ -160,7 +160,7 @@ In this section you will verify the ability to expose applications using a [Serv Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: -``` +```bash kubectl expose deployment nginx --port 80 --type NodePort ``` @@ -168,14 +168,14 @@ kubectl expose deployment nginx --port 80 --type NodePort Retrieve the node port assigned to the `nginx` service: -``` +```bash NODE_PORT=$(kubectl get svc nginx \ --output=jsonpath='{range .spec.ports[0]}{.nodePort}') ``` Create a firewall rule that allows remote access to the `nginx` node port: -``` +```bash gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ --allow=tcp:${NODE_PORT} \ --network kubernetes-the-hard-way @@ -183,20 +183,20 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service Retrieve the external IP address of a worker instance: -``` +```bash 
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') ``` Make an HTTP request using the external IP address and the `nginx` node port: -``` +```bash curl -I http://${EXTERNAL_IP}:${NODE_PORT} ``` > output -``` +```bash HTTP/1.1 200 OK Server: nginx/1.17.3 Date: Sat, 14 Sep 2019 21:12:35 GMT diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index 07be407..ca70efc 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -6,7 +6,7 @@ In this lab you will delete the compute resources created during this tutorial. Delete the controller and worker compute instances: -``` +```bash gcloud -q compute instances delete \ controller-0 controller-1 controller-2 \ worker-0 worker-1 worker-2 \ @@ -17,7 +17,7 @@ gcloud -q compute instances delete \ Delete the external load balancer network resources: -``` +```bash { gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ --region $(gcloud config get-value compute/region) @@ -32,7 +32,7 @@ Delete the external load balancer network resources: Delete the `kubernetes-the-hard-way` firewall rules: -``` +```bash gcloud -q compute firewall-rules delete \ kubernetes-the-hard-way-allow-nginx-service \ kubernetes-the-hard-way-allow-internal \ @@ -42,7 +42,7 @@ gcloud -q compute firewall-rules delete \ Delete the `kubernetes-the-hard-way` network VPC: -``` +```bash { gcloud -q compute routes delete \ kubernetes-route-10-200-0-0-24 \