diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md index 01c4d13..34178be 100644 --- a/docs/01-prerequisites.md +++ b/docs/01-prerequisites.md @@ -16,7 +16,7 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in Verify the Google Cloud SDK version is 262.0.0 or higher: -``` +```sh gcloud version ``` @@ -26,25 +26,25 @@ This tutorial assumes a default compute region and zone have been configured. If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this: -``` +```sh gcloud init ``` Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials: -``` +```sh gcloud auth login ``` Next set a default compute region and compute zone: -``` +```sh gcloud config set compute/region us-west1 ``` Set a default compute zone: -``` +```sh gcloud config set compute/zone us-west1-c ``` diff --git a/docs/02-client-tools.md b/docs/02-client-tools.md index 2252c96..03466d6 100644 --- a/docs/02-client-tools.md +++ b/docs/02-client-tools.md @@ -11,38 +11,38 @@ Download and install `cfssl` and `cfssljson`: ### OS X -``` +```sh curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssl curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssljson ``` -``` +```sh chmod +x cfssl cfssljson ``` -``` +```sh sudo mv cfssl cfssljson /usr/local/bin/ ``` Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option: -``` +```sh brew install cfssl ``` ### Linux -``` +```sh wget -q --show-progress --https-only --timestamping \ https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \ https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson ``` -``` +```sh chmod +x cfssl cfssljson ``` -``` +```sh sudo mv cfssl cfssljson /usr/local/bin/ ``` @@ -50,7 +50,7 @@ sudo mv cfssl cfssljson /usr/local/bin/ Verify 
`cfssl` and `cfssljson` version 1.3.4 or higher is installed: -``` +```sh cfssl version ``` @@ -62,7 +62,7 @@ Revision: dev Runtime: go1.13 ``` -``` +```sh cfssljson --version ``` ``` @@ -77,29 +77,29 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S ### OS X -``` +```sh curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl ``` -``` +```sh chmod +x kubectl ``` -``` +```sh sudo mv kubectl /usr/local/bin/ ``` ### Linux -``` +```sh wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl ``` -``` +```sh chmod +x kubectl ``` -``` +```sh sudo mv kubectl /usr/local/bin/ ``` @@ -107,7 +107,7 @@ sudo mv kubectl /usr/local/bin/ Verify `kubectl` version 1.15.3 or higher is installed: -``` +```sh kubectl version --client ``` diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md index a30c520..233a725 100644 --- a/docs/03-compute-resources.md +++ b/docs/03-compute-resources.md @@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com Create the `kubernetes-the-hard-way` custom VPC network: -``` +```sh gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom ``` @@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: -``` +```sh gcloud compute networks subnets create kubernetes \ --network kubernetes-the-hard-way \ --range 10.240.0.0/24 @@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \ Create a firewall rule that allows internal communication across all protocols: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --allow tcp,udp,icmp \ --network kubernetes-the-hard-way \ @@ -45,7 +45,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ Create a firewall rule that allows 
external SSH, ICMP, and HTTPS: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --allow tcp:22,tcp:6443,icmp \ --network kubernetes-the-hard-way \ @@ -56,7 +56,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ List the firewall rules in the `kubernetes-the-hard-way` VPC network: -``` +```sh gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way" ``` @@ -72,14 +72,14 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: -``` +```sh gcloud compute addresses create kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) ``` Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region: -``` +```sh gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')" ``` @@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http Create three compute instances which will host the Kubernetes control plane: -``` +```sh for i in 0 1 2; do gcloud compute instances create controller-${i} \ --async \ @@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste Create three compute instances which will host the Kubernetes worker nodes: -``` +```sh for i in 0 1 2; do gcloud compute instances create worker-${i} \ --async \ @@ -143,7 +143,7 @@ done List the compute instances in your default compute zone: -``` +```sh gcloud compute instances list ``` @@ -165,7 +165,7 @@ SSH will be used to configure the controller and worker instances. 
When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata. Test SSH access to the `controller-0` compute instance: -``` +```sh gcloud compute ssh controller-0 ``` @@ -216,7 +216,7 @@ Last login: Sun Sept 14 14:34:27 2019 from XX.XXX.XXX.XX Type `exit` at the prompt to exit the `controller-0` compute instance: -``` +```sh $USER@controller-0:~$ exit ``` > output diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index 1510993..515d469 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -8,7 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g Generate the CA configuration file, certificate, and private key: -``` +```sh { cat > ca-config.json < admin-csr.json < ${instance}-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < service-account-csr.json < encryption-config.yaml < kubernetes.default.svc.cluster.local < output -``` +```json { "major": "1", "minor": "15", diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md index 6dd752d..6f1042e 100644 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ b/docs/09-bootstrapping-kubernetes-workers.md @@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example: -``` +```sh gcloud compute ssh worker-0 ``` @@ -18,7 +18,7 @@ gcloud compute ssh worker-0 Install the OS dependencies: -``` +```sh { sudo apt-get update sudo apt-get -y install socat conntrack ipset } @@ -33,13 +33,13 @@ By default the kubelet will fail to start if [swap](https://help.ubuntu.com/comm Verify if swap is enabled: -``` +```sh sudo swapon --show ``` If output is empty then swap is not enabled.
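Note that disabling swap with `swapoff -a` lasts only until the next reboot. A hedged sketch of one way to keep swap off persistently is to comment out swap entries in `/etc/fstab`; it is demonstrated here against a sample copy rather than the live file (the fstab contents below are an invented example, not taken from this tutorial):

```sh
# Demonstrate the edit on a sample fstab copy (hypothetical layout),
# not the real /etc/fstab; on a live host you would back up and edit
# /etc/fstab itself with sudo.
cat > /tmp/fstab.sample <<'EOF'
UUID=1111-2222 /     ext4 defaults 0 1
/swapfile      none  swap sw       0 0
EOF
# Comment out any line that mentions a swap mount.
sed -i '/\bswap\b/s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

The non-swap entries are left untouched; only lines containing the word `swap` are prefixed with `#`.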
If swap is enabled run the following command to disable swap immediately: -``` +```sh sudo swapoff -a ``` @@ -47,7 +47,7 @@ sudo swapoff -a ### Download and Install Worker Binaries -``` +```sh wget -q --show-progress --https-only --timestamping \ https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \ https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \ @@ -60,7 +60,7 @@ wget -q --show-progress --https-only --timestamping \ Create the installation directories: -``` +```sh sudo mkdir -p \ /etc/cni/net.d \ /opt/cni/bin \ @@ -72,14 +72,14 @@ sudo mkdir -p \ Install the worker binaries: -``` +```sh { mkdir containerd tar -xvf crictl-v1.15.0-linux-amd64.tar.gz tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd sudo tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/ sudo mv runc.amd64 runc - chmod +x crictl kubectl kube-proxy kubelet runc + chmod +x crictl kubectl kube-proxy kubelet runc sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/ sudo mv containerd/bin/* /bin/ } @@ -89,14 +89,14 @@ Install the worker binaries: Retrieve the Pod CIDR range for the current compute instance: -``` +```sh POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) ``` Create the `bridge` network configuration file: -``` +```sh cat < The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`. +> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`. 
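Concretely, the setting that note refers to typically looks like the fragment below in `kubelet-config.yaml` (a sketch for reference only; `/run/systemd/resolve/resolv.conf` is the standard systemd-resolved upstream path and is an assumption here, not taken from this diff):

```yaml
# On systemd-resolved hosts, point the kubelet at the upstream resolver
# list instead of the 127.0.0.53 stub in /etc/resolv.conf, which would
# otherwise cause a DNS resolution loop through CoreDNS.
resolvConf: "/run/systemd/resolve/resolv.conf"
```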
Create the `kubelet.service` systemd unit file: -``` +```sh cat < 80 In a new terminal make an HTTP request using the forwarding address: -``` +```sh curl --head http://127.0.0.1:8080 ``` @@ -128,7 +128,7 @@ In this section you will verify the ability to [retrieve container logs](https:/ Print the `nginx` pod logs: -``` +```sh kubectl logs $POD_NAME ``` @@ -144,7 +144,7 @@ In this section you will verify the ability to [execute commands in a container] Print the nginx version by executing the `nginx -v` command in the `nginx` container: -``` +```sh kubectl exec -ti $POD_NAME -- nginx -v ``` @@ -160,7 +160,7 @@ In this section you will verify the ability to expose applications using a [Serv Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: -``` +```sh kubectl expose deployment nginx --port 80 --type NodePort ``` @@ -168,14 +168,14 @@ kubectl expose deployment nginx --port 80 --type NodePort Retrieve the node port assigned to the `nginx` service: -``` +```sh NODE_PORT=$(kubectl get svc nginx \ --output=jsonpath='{range .spec.ports[0]}{.nodePort}') ``` Create a firewall rule that allows remote access to the `nginx` node port: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ --allow=tcp:${NODE_PORT} \ --network kubernetes-the-hard-way @@ -183,14 +183,14 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service Retrieve the external IP address of a worker instance: -``` +```sh EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') ``` Make an HTTP request using the external IP address and the `nginx` node port: -``` +```sh curl -I http://${EXTERNAL_IP}:${NODE_PORT} ``` diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index 07be407..f630e73 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -6,7 +6,7 @@ In this lab you will delete the 
compute resources created during this tutorial. Delete the controller and worker compute instances: -``` +```sh gcloud -q compute instances delete \ controller-0 controller-1 controller-2 \ worker-0 worker-1 worker-2 \ @@ -17,7 +17,7 @@ gcloud -q compute instances delete \ Delete the external load balancer network resources: -``` +```sh { gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ --region $(gcloud config get-value compute/region) @@ -32,7 +32,7 @@ Delete the external load balancer network resources: Delete the `kubernetes-the-hard-way` firewall rules: -``` +```sh gcloud -q compute firewall-rules delete \ kubernetes-the-hard-way-allow-nginx-service \ kubernetes-the-hard-way-allow-internal \ @@ -42,7 +42,7 @@ gcloud -q compute firewall-rules delete \ Delete the `kubernetes-the-hard-way` network VPC: -``` +```sh { gcloud -q compute routes delete \ kubernetes-route-10-200-0-0-24 \