diff --git a/.gitignore b/.gitignore index 8033371..f823df1 100644 --- a/.gitignore +++ b/.gitignore @@ -1,50 +1,7 @@ -admin-csr.json -admin-key.pem -admin.csr -admin.pem -admin.kubeconfig -ca-config.json -ca-csr.json -ca-key.pem -ca.csr -ca.pem -encryption-config.yaml -kube-controller-manager-csr.json -kube-controller-manager-key.pem -kube-controller-manager.csr -kube-controller-manager.kubeconfig -kube-controller-manager.pem -kube-scheduler-csr.json -kube-scheduler-key.pem -kube-scheduler.csr -kube-scheduler.kubeconfig -kube-scheduler.pem -kube-proxy-csr.json -kube-proxy-key.pem -kube-proxy.csr -kube-proxy.kubeconfig -kube-proxy.pem -kubernetes-csr.json -kubernetes-key.pem -kubernetes.csr -kubernetes.pem -worker-0-csr.json -worker-0-key.pem -worker-0.csr -worker-0.kubeconfig -worker-0.pem -worker-1-csr.json -worker-1-key.pem -worker-1.csr -worker-1.kubeconfig -worker-1.pem -worker-2-csr.json -worker-2-key.pem -worker-2.csr -worker-2.kubeconfig -worker-2.pem -service-account-key.pem -service-account.csr -service-account.pem -service-account-csr.json *.swp +*.csr +*.pem +*.json +*.kubeconfig + +encryption-config.yaml \ No newline at end of file diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md index 01c4d13..34178be 100644 --- a/docs/01-prerequisites.md +++ b/docs/01-prerequisites.md @@ -16,7 +16,7 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in Verify the Google Cloud SDK version is 262.0.0 or higher: -``` +```sh gcloud version ``` @@ -26,25 +26,25 @@ This tutorial assumes a default compute region and zone have been configured. If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this: -``` +```sh gcloud init ``` Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials: -``` +```sh gcloud auth login ``` Next set a default compute region and compute zone: -``` +```sh gcloud config set compute/region us-west1 ``` Set a default compute zone: -``` +```sh gcloud config set compute/zone us-west1-c ``` diff --git a/docs/02-client-tools.md b/docs/02-client-tools.md index 2252c96..0aa5d5f 100644 --- a/docs/02-client-tools.md +++ b/docs/02-client-tools.md @@ -2,7 +2,6 @@ In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl). - ## Install CFSSL The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates. 
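For context on how the two tools combine: `cfssl` emits certificates and keys as JSON on stdout, and `cfssljson` splits that JSON into the individual `.pem` and `.csr` files. A minimal sketch of the pattern used throughout the Certificate Authority lab (assuming a `ca-csr.json` request file like the one created there):

```sh
# cfssl prints JSON to stdout; cfssljson -bare ca writes ca.pem, ca-key.pem, and ca.csr
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```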
@@ -11,61 +10,62 @@ Download and install `cfssl` and `cfssljson`: ### OS X -``` +```sh curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssl curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssljson ``` -``` +```sh chmod +x cfssl cfssljson ``` -``` +```sh sudo mv cfssl cfssljson /usr/local/bin/ ``` Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option: -``` +```sh brew install cfssl ``` ### Linux -``` +```sh wget -q --show-progress --https-only --timestamping \ https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \ https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson ``` -``` +```sh chmod +x cfssl cfssljson ``` -``` +```sh sudo mv cfssl cfssljson /usr/local/bin/ ``` -### Verification +### cfssl Verification Verify `cfssl` and `cfssljson` version 1.3.4 or higher is installed: -``` +```sh cfssl version ``` > output -``` +```sh Version: 1.3.4 Revision: dev Runtime: go1.13 ``` -``` +```sh cfssljson --version ``` -``` + +```sh Version: 1.3.4 Revision: dev Runtime: go1.13 @@ -75,45 +75,45 @@ Runtime: go1.13 The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries: -### OS X +### kubectl on OS X -``` +```sh curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl ``` -``` +```sh chmod +x kubectl ``` -``` +```sh sudo mv kubectl /usr/local/bin/ ``` -### Linux +### kubectl on Linux -``` +```sh wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl ``` -``` +```sh chmod +x kubectl ``` -``` +```sh sudo mv kubectl /usr/local/bin/ ``` -### Verification +### kubectl Verification Verify `kubectl` version 1.15.3 or higher is installed: -``` +```sh kubectl version --client ``` > output -``` +```sh Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} ``` diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md index a30c520..03b49d8 100644 --- a/docs/03-compute-resources.md +++ b/docs/03-compute-resources.md @@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com Create the `kubernetes-the-hard-way` custom VPC network: -``` +```sh gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom ``` @@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: -``` +```sh gcloud compute networks subnets create kubernetes \ --network kubernetes-the-hard-way \ --range 10.240.0.0/24 @@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \ Create a firewall rule that allows internal communication across all protocols: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --allow tcp,udp,icmp \ --network kubernetes-the-hard-way \ @@ -45,7 +45,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ Create a firewall rule that allows external SSH, ICMP, and HTTPS: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --allow tcp:22,tcp:6443,icmp \ --network 
kubernetes-the-hard-way \ @@ -56,13 +56,13 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ List the firewall rules in the `kubernetes-the-hard-way` VPC network: -``` +```sh gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way" ``` > output -``` +```sh NAME NETWORK DIRECTION PRIORITY ALLOW DENY kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp @@ -72,20 +72,20 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: -``` +```sh gcloud compute addresses create kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) ``` Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region: -``` +```sh gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')" ``` > output -``` +```sh NAME REGION ADDRESS STATUS kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED ``` @@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http Create three compute instances which will host the Kubernetes control plane: -``` +```sh for i in 0 1 2; do gcloud compute instances create controller-${i} \ --async \ @@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste Create three compute instances which will host the Kubernetes worker nodes: -``` +```sh for i in 0 1 2; do gcloud compute instances create worker-${i} \ --async \ @@ -143,13 +143,13 @@ done List the compute instances in your default compute zone: -``` +```sh gcloud compute instances list ``` > output -``` +```sh NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING @@ -165,13 +165,13 @@ SSH will be used to configure the controller and worker instances. When connecti Test SSH access to the `controller-0` compute instances: -``` +```sh gcloud compute ssh controller-0 ``` If this is your first time connecting to a compute instance SSH keys will be generated for you. Enter a passphrase at the prompt to continue: -``` +```sh WARNING: The public SSH key file for gcloud does not exist. WARNING: The private SSH key file for gcloud does not exist. WARNING: You do not have an SSH key for gcloud. @@ -183,7 +183,7 @@ Enter same passphrase again: At this point the generated SSH keys will be uploaded and stored in your project: -``` +```sh Your identification has been saved in /home/$USER/.ssh/google_compute_engine. Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub. The key fingerprint is: @@ -207,21 +207,21 @@ Waiting for SSH key to propagate. After the SSH keys have been updated you'll be logged into the `controller-0` instance: -``` +```sh Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1042-gcp x86_64) ... 
- Last login: Sun Sept 14 14:34:27 2019 from XX.XXX.XXX.XX ``` Type `exit` at the prompt to exit the `controller-0` compute instance: -``` +```sh $USER@controller-0:~$ exit ``` + > output -``` +```sh logout Connection to XX.XXX.XXX.XXX closed ``` diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index 1510993..1abddd8 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -8,9 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g Generate the CA configuration file, certificate, and private key: -``` -{ - +```sh cat > ca-config.json < ca-csr.json < admin-csr.json < ${instance}-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab. Results: -``` +```sh kubernetes-key.pem kubernetes.pem ``` @@ -350,9 +324,7 @@ The Kubernetes Controller Manager leverages a key pair to generate and sign serv Generate the `service-account` certificate and private key: -``` -{ - +```sh cat > service-account-csr.json < encryption-config.yaml < Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. @@ -108,7 +102,7 @@ EOF List the etcd cluster members: -``` +```sh sudo ETCDCTL_API=3 etcdctl member list \ --endpoints=https://127.0.0.1:2379 \ --cacert=/etc/etcd/ca.pem \ @@ -118,7 +112,7 @@ sudo ETCDCTL_API=3 etcdctl member list \ > output -``` +```sh 3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379 f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379 ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379 diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md index 3d0cbca..040cad4 100644 --- a/docs/08-bootstrapping-kubernetes-controllers.md +++ b/docs/08-bootstrapping-kubernetes-controllers.md @@ -6,7 +6,7 @@ In this lab you will bootstrap the Kubernetes control plane across three compute The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. 
Example: -``` +```sh gcloud compute ssh controller-0 ``` @@ -18,7 +18,7 @@ gcloud compute ssh controller-0 Create the Kubernetes configuration directory: -``` +```sh sudo mkdir -p /etc/kubernetes/config ``` @@ -26,7 +26,7 @@ sudo mkdir -p /etc/kubernetes/config Download the official Kubernetes release binaries: -``` +```sh wget -q --show-progress --https-only --timestamping \ "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \ "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \ @@ -36,35 +36,31 @@ wget -q --show-progress --https-only --timestamping \ Install the Kubernetes binaries: -``` -{ - chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl - sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ -} +```sh +chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl +sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ ``` ### Configure the Kubernetes API Server -``` -{ - sudo mkdir -p /var/lib/kubernetes/ +```sh +sudo mkdir -p /var/lib/kubernetes/ - sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ - service-account-key.pem service-account.pem \ - encryption-config.yaml /var/lib/kubernetes/ -} +sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ + service-account-key.pem service-account.pem \ + encryption-config.yaml /var/lib/kubernetes/ ``` The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance: -``` +```sh INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) ``` Create the `kube-apiserver.service` systemd unit file: -``` +```sh cat < Allow up to 10 seconds for the Kubernetes API Server to fully initialize. @@ -207,12 +201,12 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala Install a basic web server to handle HTTP health checks: -``` +```sh sudo apt-get update sudo apt-get install -y nginx ``` -``` +```sh cat > kubernetes.default.svc.cluster.local < The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**. 
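Once the load balancer resources later in this section have been created, the health check can be confirmed from this same machine. A hedged sketch, assuming the `kubernetes-target-pool` name used elsewhere in this tutorial:

```sh
# Shows the HTTP health-check status of each controller behind the target pool
gcloud compute target-pools get-health kubernetes-target-pool \
  --region $(gcloud config get-value compute/region)
```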
Retrieve the `kubernetes-the-hard-way` static IP address: -``` +```sh KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') @@ -393,24 +383,24 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har Make an HTTP request for the Kubernetes version info: -``` +```sh curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version ``` > output -``` -{ - "major": "1", - "minor": "15", - "gitVersion": "v1.15.3", - "gitCommit": "2d3c76f9091b6bec110a5e63777c332469e0cba2", - "gitTreeState": "clean", - "buildDate": "2019-08-19T11:05:50Z", - "goVersion": "go1.12.9", - "compiler": "gc", - "platform": "linux/amd64" -} +```sh +{ + "major": "1", + "minor": "15", + "gitVersion": "v1.15.3", + "gitCommit": "2d3c76f9091b6bec110a5e63777c332469e0cba2", + "gitTreeState": "clean", + "buildDate": "2019-08-19T11:05:50Z", + "goVersion": "go1.12.9", + "compiler": "gc", + "platform": "linux/amd64" +} ``` Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md) diff --git a/docs/09-bootstrapping-kubernetes-workers.md index 6dd752d..5ad5899 100644 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ b/docs/09-bootstrapping-kubernetes-workers.md @@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example: -``` +```sh gcloud compute ssh worker-0 ``` @@ -18,11 +18,9 @@ gcloud compute ssh worker-0 Install the OS dependencies: -``` -{ - sudo apt-get update - sudo apt-get -y install socat conntrack ipset -} +```sh +sudo apt-get update +sudo apt-get -y install socat conntrack ipset ``` > The socat binary enables support for the `kubectl port-forward` command. @@ -33,13 +31,13 @@ By default the kubelet will fail to start if [swap](https://help.ubuntu.com/comm Verify if swap is enabled: -``` +```sh sudo swapon --show ``` If the output is empty then swap is not enabled.
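For reference, when swap is enabled the output resembles the following (device names and sizes will vary):

```sh
NAME      TYPE SIZE USED PRIO
/swap.img file   2G   0B   -2
```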
If swap is enabled run the following command to disable swap immediately: -``` +```sh sudo swapoff -a ``` @@ -47,7 +45,7 @@ sudo swapoff -a ### Download and Install Worker Binaries -``` +```sh wget -q --show-progress --https-only --timestamping \ https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \ https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \ @@ -60,7 +58,7 @@ wget -q --show-progress --https-only --timestamping \ Create the installation directories: -``` +```sh sudo mkdir -p \ /etc/cni/net.d \ /opt/cni/bin \ @@ -72,31 +70,29 @@ sudo mkdir -p \ Install the worker binaries: -``` -{ - mkdir containerd - tar -xvf crictl-v1.15.0-linux-amd64.tar.gz - tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd - sudo tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/ - sudo mv runc.amd64 runc - chmod +x crictl kubectl kube-proxy kubelet runc - sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/ - sudo mv containerd/bin/* /bin/ -} +```sh +mkdir containerd +tar -xvf crictl-v1.15.0-linux-amd64.tar.gz +tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd +sudo tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/ +sudo mv runc.amd64 runc +chmod +x crictl kubectl kube-proxy kubelet runc +sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/ +sudo mv containerd/bin/* /bin/ ``` ### Configure CNI Networking Retrieve the Pod CIDR range for the current compute instance: -``` +```sh POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) ``` Create the `bridge` network configuration file: -``` +```sh cat < Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`. @@ -296,14 +288,14 @@ EOF List the registered Kubernetes nodes: -``` +```sh gcloud compute ssh controller-0 \ --command "kubectl get nodes --kubeconfig admin.kubeconfig" ``` > output -``` +```sh NAME STATUS ROLES AGE VERSION worker-0 Ready 15s v1.15.3 worker-1 Ready 15s v1.15.3 diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md index c64a434..b4447d9 100644 --- a/docs/10-configuring-kubectl.md +++ b/docs/10-configuring-kubectl.md @@ -10,40 +10,38 @@ Each kubeconfig requires a Kubernetes API Server to connect to. 
To support high Generate a kubeconfig file suitable for authenticating as the `admin` user: -``` -{ - KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ - --region $(gcloud config get-value compute/region) \ - --format 'value(address)') +```sh +KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ + --region $(gcloud config get-value compute/region) \ + --format 'value(address)') - kubectl config set-cluster kubernetes-the-hard-way \ - --certificate-authority=ca.pem \ - --embed-certs=true \ - --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 +kubectl config set-cluster kubernetes-the-hard-way \ + --certificate-authority=ca.pem \ + --embed-certs=true \ + --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 - kubectl config set-credentials admin \ - --client-certificate=admin.pem \ - --client-key=admin-key.pem +kubectl config set-credentials admin \ + --client-certificate=admin.pem \ + --client-key=admin-key.pem - kubectl config set-context kubernetes-the-hard-way \ - --cluster=kubernetes-the-hard-way \ - --user=admin +kubectl config set-context kubernetes-the-hard-way \ + --cluster=kubernetes-the-hard-way \ + --user=admin - kubectl config use-context kubernetes-the-hard-way -} +kubectl config use-context kubernetes-the-hard-way ``` ## Verification Check the health of the remote Kubernetes cluster: -``` +```sh kubectl get componentstatuses ``` > output -``` +```sh NAME STATUS MESSAGE ERROR controller-manager Healthy ok scheduler Healthy ok @@ -54,13 +52,13 @@ etcd-0 Healthy {"health":"true"} List the nodes in the remote Kubernetes cluster: -``` +```sh kubectl get nodes ``` > output -``` +```sh NAME STATUS ROLES AGE VERSION worker-0 Ready 2m9s v1.15.3 worker-1 Ready 2m9s v1.15.3 diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md index c9f0b6a..e72005f 100644 --- a/docs/11-pod-network-routes.md +++ b/docs/11-pod-network-routes.md @@ -12,7 +12,7 @@ In this section you will gather the information required to create routes in the Print the internal IP address and Pod CIDR range for each worker instance: -``` +```sh for instance in worker-0 worker-1 worker-2; do gcloud compute instances describe ${instance} \ --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' @@ -21,7 +21,7 @@ done > output -``` +```sh 10.240.0.20 10.200.0.0/24 10.240.0.21 10.200.1.0/24 10.240.0.22 10.200.2.0/24 @@ -31,7 +31,7 @@ done Create network routes for each worker instance: -``` +```sh for i in 0 1 2; do gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ --network kubernetes-the-hard-way \ @@ -42,13 +42,13 @@ done List the routes in the `kubernetes-the-hard-way` VPC network: -``` +```sh gcloud compute routes list --filter "network: kubernetes-the-hard-way" ``` > output -``` +```sh NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY default-route-081879136902de56 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000 default-route-55199a5aa126d7aa kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 diff --git a/docs/12-dns-addon.md b/docs/12-dns-addon.md index f7a5d43..b3c60ae 100644 --- a/docs/12-dns-addon.md +++ b/docs/12-dns-addon.md @@ -6,13 +6,13 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts Deploy the `coredns` cluster add-on: -``` +```sh kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml ``` > output -``` +```sh serviceaccount/coredns created 
clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created @@ -23,13 +23,13 @@ service/kube-dns created List the pods created by the `kube-dns` deployment: -``` +```sh kubectl get pods -l k8s-app=kube-dns -n kube-system ``` > output -``` +```sh NAME READY STATUS RESTARTS AGE coredns-699f8ddd77-94qv9 1/1 Running 0 20s coredns-699f8ddd77-gtcgb 1/1 Running 0 20s @@ -39,38 +39,38 @@ coredns-699f8ddd77-gtcgb 1/1 Running 0 20s Create a `busybox` deployment: -``` +```sh kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600 ``` List the pod created by the `busybox` deployment: -``` +```sh kubectl get pods -l run=busybox ``` > output -``` +```sh NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 3s ``` Retrieve the full name of the `busybox` pod: -``` +```sh POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") ``` Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod: -``` +```sh kubectl exec -ti $POD_NAME -- nslookup kubernetes ``` > output -``` +```sh Server: 10.32.0.10 Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md index ed90844..af2098a 100644 --- a/docs/13-smoke-test.md +++ b/docs/13-smoke-test.md @@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt Create a generic secret: -``` +```sh kubectl create secret generic kubernetes-the-hard-way \ --from-literal="mykey=mydata" ``` Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: -``` +```sh gcloud compute ssh controller-0 \ --command "sudo ETCDCTL_API=3 etcdctl get \ --endpoints=https://127.0.0.1:2379 \ @@ -27,7 +27,7 @@ gcloud compute ssh controller-0 \ > output -``` +```sh 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret| 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern| 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa| @@ -53,19 +53,19 @@ In this section you will verify the ability to create and manage [Deployments](h Create a deployment for the [nginx](https://nginx.org/en/) web server: -``` +```sh kubectl create deployment nginx --image=nginx ``` List the pod created by the `nginx` deployment: -``` +```sh kubectl get pods -l app=nginx ``` > output -``` +```sh NAME READY STATUS RESTARTS AGE nginx-554b9c67f9-vt5rn 1/1 Running 0 10s ``` @@ -76,32 +76,32 @@ In this section you will verify the ability to access applications remotely usin Retrieve the full name of the `nginx` pod: -``` +```sh POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}") ``` Forward port `8080` on your local machine to port `80` of the `nginx` pod: -``` +```sh kubectl port-forward $POD_NAME 8080:80 ``` > output -``` +```sh Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 ``` In a new terminal make an HTTP request using the forwarding address: -``` +```sh curl --head http://127.0.0.1:8080 ``` > output -``` +```sh HTTP/1.1 200 OK Server: nginx/1.17.3 Date: Sat, 14 Sep 2019 21:10:11 GMT @@ -115,7 +115,7 @@ Accept-Ranges: bytes Switch back to the previous terminal and stop the port forwarding to the `nginx` pod: -``` +```sh Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 Handling connection for 8080 @@ -128,13 +128,13 @@ In this section you will verify the ability to [retrieve container logs](https:/ Print the `nginx` pod logs: -``` +```sh 
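# POD_NAME still refers to the nginx pod captured in the Port Forwarding section above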
kubectl logs $POD_NAME ``` > output -``` +```sh 127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-" ``` @@ -144,13 +144,13 @@ In this section you will verify the ability to [execute commands in a container] Print the nginx version by executing the `nginx -v` command in the `nginx` container: -``` +```sh kubectl exec -ti $POD_NAME -- nginx -v ``` > output -``` +```sh nginx version: nginx/1.17.3 ``` @@ -160,7 +160,7 @@ In this section you will verify the ability to expose applications using a [Serv Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: -``` +```sh kubectl expose deployment nginx --port 80 --type NodePort ``` @@ -168,14 +168,14 @@ kubectl expose deployment nginx --port 80 --type NodePort Retrieve the node port assigned to the `nginx` service: -``` +```sh NODE_PORT=$(kubectl get svc nginx \ --output=jsonpath='{range .spec.ports[0]}{.nodePort}') ``` Create a firewall rule that allows remote access to the `nginx` node port: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ --allow=tcp:${NODE_PORT} \ --network kubernetes-the-hard-way @@ -183,20 +183,20 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service Retrieve the external IP address of a worker instance: -``` +```sh EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') ``` Make an HTTP request using the external IP address and the `nginx` node port: -``` +```sh curl -I http://${EXTERNAL_IP}:${NODE_PORT} ``` > output -``` +```sh HTTP/1.1 200 OK Server: nginx/1.17.3 Date: Sat, 14 Sep 2019 21:12:35 GMT diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index 07be407..4fa9036 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -6,7 +6,7 @@ In this lab you will delete the compute resources created during this tutorial. 
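Optionally, remove the objects created during the smoke test first. This is a sketch using the names from the smoke-test lab, and it can be skipped since these objects are destroyed along with the cluster:

```sh
# Optional: delete the smoke-test objects (names from docs/13-smoke-test.md)
kubectl delete service nginx
kubectl delete deployment nginx
kubectl delete pod busybox
kubectl delete secret kubernetes-the-hard-way
```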
Delete the controller and worker compute instances: -``` +```sh gcloud -q compute instances delete \ controller-0 controller-1 controller-2 \ worker-0 worker-1 worker-2 \ @@ -17,22 +17,20 @@ gcloud -q compute instances delete \ Delete the external load balancer network resources: -``` -{ - gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ - --region $(gcloud config get-value compute/region) +```sh +gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ + --region $(gcloud config get-value compute/region) - gcloud -q compute target-pools delete kubernetes-target-pool +gcloud -q compute target-pools delete kubernetes-target-pool - gcloud -q compute http-health-checks delete kubernetes +gcloud -q compute http-health-checks delete kubernetes - gcloud -q compute addresses delete kubernetes-the-hard-way -} +gcloud -q compute addresses delete kubernetes-the-hard-way ``` Delete the `kubernetes-the-hard-way` firewall rules: -``` +```sh gcloud -q compute firewall-rules delete \ kubernetes-the-hard-way-allow-nginx-service \ kubernetes-the-hard-way-allow-internal \ @@ -42,15 +40,13 @@ gcloud -q compute firewall-rules delete \ Delete the `kubernetes-the-hard-way` VPC network: -``` -{ - gcloud -q compute routes delete \ - kubernetes-route-10-200-0-0-24 \ - kubernetes-route-10-200-1-0-24 \ - kubernetes-route-10-200-2-0-24 +```sh +gcloud -q compute routes delete \ + kubernetes-route-10-200-0-0-24 \ + kubernetes-route-10-200-1-0-24 \ + kubernetes-route-10-200-2-0-24 - gcloud -q compute networks subnets delete kubernetes +gcloud -q compute networks subnets delete kubernetes - gcloud -q compute networks delete kubernetes-the-hard-way -} +gcloud -q compute networks delete kubernetes-the-hard-way ```
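As a final check, verify that nothing was left behind; each of these listings should come back without any `kubernetes-the-hard-way` entries:

```sh
# Confirm the cleanup: no tutorial instances, addresses, or networks should remain
gcloud compute instances list
gcloud compute addresses list
gcloud compute networks list --filter="name=kubernetes-the-hard-way"
```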