From 50163592a90b92ea60476d4e193c5fe643fe4636 Mon Sep 17 00:00:00 2001 From: Bright Zheng Date: Sat, 3 Aug 2019 12:03:53 +0800 Subject: [PATCH] Update to Kubernetes v1.14.4 with necessary fixes --- README.md | 15 +- docs/01-prerequisites.md | 17 +- docs/02-client-tools.md | 47 +++--- docs/03-compute-resources.md | 26 +-- docs/04-certificate-authority.md | 153 ++++++++++-------- docs/05-kubernetes-configuration-files.md | 29 ++-- docs/06-data-encryption-keys.md | 4 +- docs/07-bootstrapping-etcd.md | 32 ++-- ...08-bootstrapping-kubernetes-controllers.md | 84 +++++----- docs/09-bootstrapping-kubernetes-workers.md | 65 ++++---- docs/10-configuring-kubectl.md | 14 +- docs/11-pod-network-routes.md | 6 +- docs/12-dns-addon.md | 10 +- docs/13-smoke-test.md | 92 ++++++----- docs/14-cleanup.md | 6 +- 15 files changed, 329 insertions(+), 271 deletions(-) diff --git a/README.md b/README.md index fae7a56..3abeb73 100644 --- a/README.md +++ b/README.md @@ -14,12 +14,15 @@ The target audience for this tutorial is someone planning to support a productio Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication. -* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.12.0 -* [containerd Container Runtime](https://github.com/containerd/containerd) 1.2.0-rc.0 -* [gVisor](https://github.com/google/gvisor) 50c283b9f56bb7200938d9e207355f05f79f0d17 -* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0 -* [etcd](https://github.com/coreos/etcd) v3.3.9 -* [CoreDNS](https://github.com/coredns/coredns) v1.2.2 +* [Kubernetes](https://github.com/kubernetes/kubernetes) v1.14.4 +* [containerd Container Runtime](https://github.com/containerd/containerd) 1.2.7 +* [gVisor](https://github.com/google/gvisor) release-20190529.1 +* [CNI Container Networking](https://github.com/containernetworking/cni) 0.7.1 +* [etcd](https://github.com/coreos/etcd) v3.3.13 +* [CoreDNS](https://github.com/coredns/coredns) v1.5.2 +* [cri-tools](https://github.com/kubernetes-sigs/cri-tools) v1.15.0 +* [runc](https://github.com/opencontainers/runc) v1.0.0-rc8 +* [CNI plugins](https://github.com/containernetworking/plugins) v0.8.1 ## Labs diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md index eacf09f..8d0a363 100644 --- a/docs/01-prerequisites.md +++ b/docs/01-prerequisites.md @@ -16,29 +16,38 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in Verify the Google Cloud SDK version is 218.0.0 or higher: -``` +```sh gcloud version ``` +> output + +``` +Google Cloud SDK 241.0.0 +bq 2.0.43 +core 2019.04.02 +gsutil 4.38 +``` + ### Set a Default Compute Region and Zone This tutorial assumes a default compute region and zone have been configured. 
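+To check whether defaults are already in place, query the two properties directly (`gcloud config get-value` is the same command this tutorial uses later to read the configured region; it warns `(unset)` when a property has no value):
+
+```sh
+gcloud config get-value compute/region
+gcloud config get-value compute/zone
+```
+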
If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this: -``` +```sh gcloud init ``` Otherwise set a default compute region: -``` +```sh gcloud config set compute/region us-west1 ``` Set a default compute zone: -``` +```sh gcloud config set compute/zone us-west1-c ``` diff --git a/docs/02-client-tools.md b/docs/02-client-tools.md index f4ef130..76039ff 100644 --- a/docs/02-client-tools.md +++ b/docs/02-client-tools.md @@ -11,42 +11,42 @@ Download and install `cfssl` and `cfssljson` from the [cfssl repository](https:/ ### OS X -``` +```sh curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64 curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64 ``` -``` +```sh chmod +x cfssl cfssljson ``` -``` +```sh sudo mv cfssl cfssljson /usr/local/bin/ ``` Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option: -``` +```sh brew install cfssl ``` ### Linux -``` +```sh wget -q --show-progress --https-only --timestamping \ https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \ https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 ``` -``` +```sh chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 ``` -``` +```sh sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl ``` -``` +```sh sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson ``` @@ -54,64 +54,65 @@ sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson Verify `cfssl` version 1.2.0 or higher is installed: -``` +```sh cfssl version ``` > output ``` -Version: 1.2.0 +Version: 1.3.4 Revision: dev -Runtime: go1.6 +Runtime: go1.12.7 ``` > The cfssljson command line utility does not provide a way to print its version. ## Install kubectl -The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries: +The `kubectl` command line utility is used to interact with the Kubernetes API Server. 
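+> Note: the commands below resolve the latest stable release at download time (via `stable.txt`), so the version you get may be newer than the v1.14.4 this tutorial targets. To pin it instead, substitute the version into the release path — a variant of the same download, shown for Linux (swap in `darwin` for OS X):
+
+```sh
+curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl"
+```
+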
+Download and install latest stable `kubectl` from the official release binaries: ### OS X -``` -curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl +```sh +curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl" ``` -``` +```sh chmod +x kubectl ``` -``` +```sh sudo mv kubectl /usr/local/bin/ ``` ### Linux -``` -wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl +```sh +curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" ``` -``` +```sh chmod +x kubectl ``` -``` +```sh sudo mv kubectl /usr/local/bin/ ``` ### Verification -Verify `kubectl` version 1.12.0 or higher is installed: +Verify `kubectl` version 1.14.0 or higher is installed: -``` +```sh kubectl version --client ``` > output ``` -Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} +Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"} ``` Next: [Provisioning Compute Resources](03-compute-resources.md) diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md index bd92c3c..4b79555 100644 --- a/docs/03-compute-resources.md +++ b/docs/03-compute-resources.md @@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com Create the `kubernetes-the-hard-way` custom VPC network: -``` +```sh gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom ``` @@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: -``` +```sh gcloud compute networks subnets create kubernetes \ --network kubernetes-the-hard-way \ --range 10.240.0.0/24 @@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \ Create a firewall rule that allows internal communication across all protocols: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --allow tcp,udp,icmp \ --network kubernetes-the-hard-way \ @@ -45,7 +45,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ Create a firewall rule that allows external SSH, ICMP, and HTTPS: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --allow tcp:22,tcp:6443,icmp \ --network kubernetes-the-hard-way \ @@ -56,7 +56,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ List the firewall rules in the `kubernetes-the-hard-way` VPC network: -``` +```sh gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way" ``` @@ -72,14 +72,14 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: -``` +```sh gcloud compute addresses create kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) ``` Verify the 
`kubernetes-the-hard-way` static IP address was created in your default compute region: -``` +```sh gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')" ``` @@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http Create three compute instances which will host the Kubernetes control plane: -``` +```sh for i in 0 1 2; do gcloud compute instances create controller-${i} \ --async \ @@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste Create three compute instances which will host the Kubernetes worker nodes: -``` +```sh for i in 0 1 2; do gcloud compute instances create worker-${i} \ --async \ @@ -143,7 +143,7 @@ done List the compute instances in your default compute zone: -``` +```sh gcloud compute instances list ``` @@ -165,7 +165,7 @@ SSH will be used to configure the controller and worker instances. When connecti Test SSH access to the `controller-0` compute instances: -``` +```sh gcloud compute ssh controller-0 ``` @@ -217,12 +217,12 @@ Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX Type `exit` at the prompt to exit the `controller-0` compute instance: -``` +```sh $USER@controller-0:~$ exit ``` > output -``` +```sh logout Connection to XX.XXX.XXX.XXX closed ``` diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index f8842d9..b705904 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -8,7 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g Generate the CA configuration file, certificate, and private key: -``` +```sh { cat > ca-config.json < admin-csr.json <`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements. +Kubernetes uses a **special-purpose authorization mode** called [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/), that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements. Generate a certificate and private key for each Kubernetes worker node: -``` +```sh for instance in worker-0 worker-1 worker-2; do -cat > ${instance}-csr.json < ${instance}-csr.json < ${instance}-csr.json < kube-controller-manager-csr.json < kube-controller-manager-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-proxy-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < kube-scheduler-csr.json < kube-scheduler-csr.json < kubernetes-csr.json < kubernetes-csr.json < kubernetes-csr.json < service-account-csr.json < service-account-csr.json < service-account-csr.json < encryption-config.yaml < kubernetes.default.svc.cluster.local < This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization. 
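+> For illustration only, the question the webhook authorizer asks looks roughly like the `SubjectAccessReview` below — a sketch, not part of the original tutorial: the `kubernetes` user matches the ClusterRoleBinding created next, and the resource attributes are representative of a Kubelet API call rather than captured from a live cluster. It can be run from `controller-0` like the surrounding commands:
+
+```sh
+cat <<EOF | kubectl create --kubeconfig admin.kubeconfig -o yaml -f -
+apiVersion: authorization.k8s.io/v1
+kind: SubjectAccessReview
+spec:
+  user: kubernetes
+  resourceAttributes:
+    verb: get
+    resource: nodes
+    subresource: proxy
+    name: worker-0
+EOF
+```
+
+The `status.allowed` field in the response shows the authorizer's decision; it becomes `true` once the ClusterRole and ClusterRoleBinding below are in place.
+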
-``` +```sh gcloud compute ssh controller-0 ``` Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods: -``` +```sh cat < output @@ -397,12 +399,12 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version ``` { "major": "1", - "minor": "12", - "gitVersion": "v1.12.0", - "gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0", + "minor": "14", + "gitVersion": "v1.14.4", + "gitCommit": "a87e9a978f65a8303aa9467537aa59c18122cbf9", "gitTreeState": "clean", - "buildDate": "2018-09-27T16:55:41Z", - "goVersion": "go1.10.4", + "buildDate": "2019-07-08T08:43:10Z", + "goVersion": "go1.12.5", "compiler": "gc", "platform": "linux/amd64" } diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md index bec4960..9c1ad05 100644 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ b/docs/09-bootstrapping-kubernetes-workers.md @@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example: -``` +```sh gcloud compute ssh worker-0 ``` @@ -18,7 +18,7 @@ gcloud compute ssh worker-0 Install the OS dependencies: -``` +```sh { sudo apt-get update sudo apt-get -y install socat conntrack ipset @@ -29,21 +29,21 @@ Install the OS dependencies: ### Download and Install Worker Binaries -``` +```sh wget -q --show-progress --https-only --timestamping \ - https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \ + https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \ https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \ - https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \ - https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \ - https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \ - https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \ - https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \ - https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet + https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \ + https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz \ + https://github.com/containerd/containerd/releases/download/v1.2.7/containerd-1.2.7.linux-amd64.tar.gz \ + https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl \ + https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-proxy \ + https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubelet ``` Create the installation directories: -``` +```sh sudo mkdir -p \ /etc/cni/net.d \ /opt/cni/bin \ @@ -55,15 +55,15 @@ sudo mkdir -p \ Install the worker binaries: -``` +```sh { sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc sudo mv runc.amd64 runc chmod +x kubectl kube-proxy kubelet runc runsc sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/ 
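+  # The three archives below unpack in place: crictl into /usr/local/bin, the
+  # CNI plugins into /opt/cni/bin, and containerd (whose tarball is rooted at
+  # bin/) into /, so its binaries land under /bin.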
- sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/ - sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/ - sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C / + sudo tar -xvf crictl-v1.15.0-linux-amd64.tar.gz -C /usr/local/bin/ + sudo tar -xvf cni-plugins-linux-amd64-v0.8.1.tgz -C /opt/cni/bin/ + sudo tar -xvf containerd-1.2.7.linux-amd64.tar.gz -C / } ``` @@ -71,14 +71,15 @@ Install the worker binaries: Retrieve the Pod CIDR range for the current compute instance: -``` +```sh POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) +echo $POD_CIDR ``` Create the `bridge` network configuration file: -``` +```sh cat < 35s v1.12.0 -worker-1 Ready 36s v1.12.0 -worker-2 Ready 36s v1.12.0 +worker-0 Ready 94s v1.14.4 +worker-1 Ready 93s v1.14.4 +worker-2 Ready 92s v1.14.4 ``` Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md) diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md index 8ac0026..e496046 100644 --- a/docs/10-configuring-kubectl.md +++ b/docs/10-configuring-kubectl.md @@ -10,7 +10,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high Generate a kubeconfig file suitable for authenticating as the `admin` user: -``` +```sh { KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ @@ -37,7 +37,7 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user: Check the health of the remote Kubernetes cluster: -``` +```sh kubectl get componentstatuses ``` @@ -54,17 +54,17 @@ etcd-0 Healthy {"health":"true"} List the nodes in the remote Kubernetes cluster: -``` +```sh kubectl get nodes ``` > output ``` -NAME STATUS ROLES AGE VERSION -worker-0 Ready 117s v1.12.0 -worker-1 Ready 118s v1.12.0 -worker-2 Ready 118s v1.12.0 +NAME STATUS ROLES AGE VERSION +worker-0 Ready 3m59s v1.14.4 +worker-1 Ready 3m58s v1.14.4 +worker-2 Ready 3m57s v1.14.4 ``` Next: [Provisioning Pod Network Routes](11-pod-network-routes.md) diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md index c9f0b6a..9fac7d1 100644 --- a/docs/11-pod-network-routes.md +++ b/docs/11-pod-network-routes.md @@ -12,7 +12,7 @@ In this section you will gather the information required to create routes in the Print the internal IP address and Pod CIDR range for each worker instance: -``` +```sh for instance in worker-0 worker-1 worker-2; do gcloud compute instances describe ${instance} \ --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' @@ -31,7 +31,7 @@ done Create network routes for each worker instance: -``` +```sh for i in 0 1 2; do gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ --network kubernetes-the-hard-way \ @@ -42,7 +42,7 @@ done List the routes in the `kubernetes-the-hard-way` VPC network: -``` +```sh gcloud compute routes list --filter "network: kubernetes-the-hard-way" ``` diff --git a/docs/12-dns-addon.md b/docs/12-dns-addon.md index 67c5e5b..ec6055f 100644 --- a/docs/12-dns-addon.md +++ b/docs/12-dns-addon.md @@ -6,7 +6,7 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts Deploy the `coredns` cluster add-on: -``` +```sh kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml ``` @@ -23,7 +23,7 @@ service/kube-dns created List the pods created by the `kube-dns` deployment: -``` +```sh kubectl get pods -l 
k8s-app=kube-dns -n kube-system ``` @@ -39,13 +39,13 @@ coredns-699f8ddd77-gtcgb 1/1 Running 0 20s Create a `busybox` deployment: -``` +```sh kubectl run busybox --image=busybox:1.28 --command -- sleep 3600 ``` List the pod created by the `busybox` deployment: -``` +```sh kubectl get pods -l run=busybox ``` @@ -64,7 +64,7 @@ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod: -``` +```sh kubectl exec -ti $POD_NAME -- nslookup kubernetes ``` diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md index f302909..873bf24 100644 --- a/docs/13-smoke-test.md +++ b/docs/13-smoke-test.md @@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt Create a generic secret: -``` +```sh kubectl create secret generic kubernetes-the-hard-way \ --from-literal="mykey=mydata" ``` Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: -``` +```sh gcloud compute ssh controller-0 \ --command "sudo ETCDCTL_API=3 etcdctl get \ --endpoints=https://127.0.0.1:2379 \ @@ -54,13 +54,13 @@ In this section you will verify the ability to create and manage [Deployments](h Create a deployment for the [nginx](https://nginx.org/en/) web server: -``` +```sh kubectl run nginx --image=nginx ``` List the pod created by the `nginx` deployment: -``` +```sh kubectl get pods -l run=nginx ``` @@ -77,13 +77,13 @@ In this section you will verify the ability to access applications remotely usin Retrieve the full name of the `nginx` pod: -``` +```sh POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}") ``` Forward port `8080` on your local machine to port `80` of the `nginx` pod: -``` +```sh kubectl port-forward $POD_NAME 8080:80 ``` @@ -104,13 +104,13 @@ curl --head http://127.0.0.1:8080 ``` HTTP/1.1 200 OK -Server: nginx/1.15.4 -Date: Sun, 30 Sep 2018 19:23:10 GMT +Server: nginx/1.17.2 +Date: Sat, 03 Aug 2019 03:35:08 GMT Content-Type: text/html Content-Length: 612 -Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT +Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT Connection: keep-alive -ETag: "5baa4e63-264" +ETag: "5d36f361-264" Accept-Ranges: bytes ``` @@ -129,14 +129,14 @@ In this section you will verify the ability to [retrieve container logs](https:/ Print the `nginx` pod logs: -``` +```sh kubectl logs $POD_NAME ``` > output ``` -127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-" +127.0.0.1 - - [03/Aug/2019:03:35:08 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-" ``` ### Exec @@ -152,7 +152,7 @@ kubectl exec -ti $POD_NAME -- nginx -v > output ``` -nginx version: nginx/1.15.4 +nginx version: nginx/1.17.2 ``` ## Services @@ -161,7 +161,7 @@ In this section you will verify the ability to expose applications using a [Serv Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: -``` +```sh kubectl expose deployment nginx --port 80 --type NodePort ``` @@ -169,14 +169,21 @@ kubectl expose deployment nginx --port 80 --type NodePort Retrieve the node port assigned to the `nginx` service: -``` +```sh NODE_PORT=$(kubectl get svc nginx \ --output=jsonpath='{range .spec.ports[0]}{.nodePort}') +echo $NODE_PORT +``` + +> output + +``` +30313 ``` Create a firewall rule that allows remote access to the `nginx` node port: -``` +```sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ --allow=tcp:${NODE_PORT} \ 
--network kubernetes-the-hard-way @@ -184,28 +191,28 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service Retrieve the external IP address of a worker instance: -``` +```sh EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') ``` Make an HTTP request using the external IP address and the `nginx` node port: -``` -curl -I http://${EXTERNAL_IP}:${NODE_PORT} +```sh +curl -I "http://${EXTERNAL_IP}:${NODE_PORT}" ``` > output ``` HTTP/1.1 200 OK -Server: nginx/1.15.4 -Date: Sun, 30 Sep 2018 19:25:40 GMT +Server: nginx/1.17.2 +Date: Sat, 03 Aug 2019 03:43:19 GMT Content-Type: text/html Content-Length: 612 -Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT +Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT Connection: keep-alive -ETag: "5baa4e63-264" +ETag: "5d36f361-264" Accept-Ranges: bytes ``` @@ -215,7 +222,7 @@ This section will verify the ability to run untrusted workloads using [gVisor](h Create the `untrusted` pod: -``` +```sh cat < output + +``` +NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES +pod/busybox-68f7d47fc6-fnzlp 1/1 Running 0 18m 10.200.1.2 worker-1 +pod/nginx 1/1 Running 0 11m 10.200.1.3 worker-1 +pod/untrusted 1/1 Running 0 90s 10.200.0.3 worker-0 + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR +service/kubernetes ClusterIP 10.32.0.1 443/TCP 7h20m +service/nginx NodePort 10.32.0.147 80:31209/TCP 7m30s run=nginx +``` Get the node name where the `untrusted` pod is running: -``` +```sh INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}') ``` SSH into the worker node: -``` +```sh gcloud compute ssh ${INSTANCE_NAME} ``` List the containers running under gVisor: -``` +```sh sudo runsc --root /run/containerd/runsc/k8s.io list ``` + +> output + ``` I0930 19:27:13.255142 20832 x:0] *************************** I0930 19:27:13.255326 20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list] @@ -285,21 +301,21 @@ I0930 19:27:13.259733 20832 x:0] Exiting with status: 0 Get the ID of the `untrusted` pod: -``` +```sh POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \ pods --name untrusted -q) ``` Get the ID of the `webserver` container running in the `untrusted` pod: -``` +```sh CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \ ps -p ${POD_ID} -q) ``` Use the gVisor `runsc` command to display the processes running inside the `webserver` container: -``` +```sh sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID} ``` diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index dc97a3a..96b0d5a 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -6,7 +6,7 @@ In this lab you will delete the compute resources created during this tutorial. Delete the controller and worker compute instances: -``` +```sh gcloud -q compute instances delete \ controller-0 controller-1 controller-2 \ worker-0 worker-1 worker-2 @@ -16,7 +16,7 @@ gcloud -q compute instances delete \ Delete the external load balancer network resources: -``` +```sh { gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ --region $(gcloud config get-value compute/region) @@ -31,7 +31,7 @@ Delete the external load balancer network resources: Delete the `kubernetes-the-hard-way` firewall rules: -``` +```sh gcloud -q compute firewall-rules delete \ kubernetes-the-hard-way-allow-nginx-service \ kubernetes-the-hard-way-allow-internal \