diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md index eacf09f..89a557f 100644 --- a/docs/01-prerequisites.md +++ b/docs/01-prerequisites.md @@ -16,9 +16,9 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in Verify the Google Cloud SDK version is 218.0.0 or higher: -``` +~~~sh gcloud version -``` +~~~ ### Set a Default Compute Region and Zone @@ -26,21 +26,21 @@ This tutorial assumes a default compute region and zone have been configured. If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this: -``` +~~~sh gcloud init -``` +~~~ Otherwise set a default compute region: -``` +~~~sh gcloud config set compute/region us-west1 -``` +~~~ Set a default compute zone: -``` +~~~sh gcloud config set compute/zone us-west1-c -``` +~~~ > Use the `gcloud compute zones list` command to view additional regions and zones. diff --git a/docs/02-client-tools.md b/docs/02-client-tools.md index f4ef130..8fe190a 100644 --- a/docs/02-client-tools.md +++ b/docs/02-client-tools.md @@ -11,60 +11,60 @@ Download and install `cfssl` and `cfssljson` from the [cfssl repository](https:/ ### OS X -``` +~~~sh curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64 curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64 -``` +~~~ -``` +~~~sh chmod +x cfssl cfssljson -``` +~~~ -``` +~~~sh sudo mv cfssl cfssljson /usr/local/bin/ -``` +~~~ Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option: -``` +~~~sh brew install cfssl -``` +~~~ ### Linux -``` +~~~sh wget -q --show-progress --https-only --timestamping \ https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \ https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -``` +~~~ -``` +~~~sh chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 -``` +~~~ -``` +~~~sh sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl -``` +~~~ -``` +~~~sh sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson -``` +~~~ ### Verification Verify `cfssl` version 1.2.0 or higher is installed: -``` +~~~sh cfssl version -``` +~~~ > output -``` +~~~ Version: 1.2.0 Revision: dev Runtime: go1.6 -``` +~~~ > The cfssljson command line utility does not provide a way to print its version. 
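Since `cfssljson` cannot report a version, a reasonable sanity check is simply to confirm that both binaries resolve on the `PATH` (a minimal sketch; adjust if you installed to a different location):

~~~sh
# Both paths should print, e.g. /usr/local/bin/cfssl and /usr/local/bin/cfssljson.
command -v cfssl cfssljson
~~~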
@@ -74,44 +74,44 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S ### OS X -``` +~~~sh curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl -``` +~~~ -``` +~~~sh chmod +x kubectl -``` +~~~ -``` +~~~sh sudo mv kubectl /usr/local/bin/ -``` +~~~ ### Linux -``` +~~~sh wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl -``` +~~~ -``` +~~~sh chmod +x kubectl -``` +~~~ -``` +~~~sh sudo mv kubectl /usr/local/bin/ -``` +~~~ ### Verification Verify `kubectl` version 1.12.0 or higher is installed: -``` +~~~sh kubectl version --client -``` +~~~ > output -``` +~~~ Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} -``` +~~~ Next: [Provisioning Compute Resources](03-compute-resources.md) diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md index bd92c3c..db755b2 100644 --- a/docs/03-compute-resources.md +++ b/docs/03-compute-resources.md @@ -16,19 +16,19 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com Create the `kubernetes-the-hard-way` custom VPC network: -``` +~~~sh gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom -``` +~~~ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster. Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: -``` +~~~sh gcloud compute networks subnets create kubernetes \ --network kubernetes-the-hard-way \ --range 10.240.0.0/24 -``` +~~~ > The `10.240.0.0/24` IP address range can host up to 254 compute instances. @@ -36,59 +36,59 @@ gcloud compute networks subnets create kubernetes \ Create a firewall rule that allows internal communication across all protocols: -``` +~~~sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --allow tcp,udp,icmp \ --network kubernetes-the-hard-way \ --source-ranges 10.240.0.0/24,10.200.0.0/16 -``` +~~~ Create a firewall rule that allows external SSH, ICMP, and HTTPS: -``` +~~~sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --allow tcp:22,tcp:6443,icmp \ --network kubernetes-the-hard-way \ --source-ranges 0.0.0.0/0 -``` +~~~ > An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients. 
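If you want more detail on a single rule than the list view below provides, `describe` prints the full resource (a minimal sketch using the external rule created above):

~~~sh
# Shows the allowed protocols and ports (tcp:22, tcp:6443, icmp) and the 0.0.0.0/0 source range.
gcloud compute firewall-rules describe kubernetes-the-hard-way-allow-external
~~~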
List the firewall rules in the `kubernetes-the-hard-way` VPC network: -``` +~~~sh gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way" -``` +~~~ > output -``` +~~~ NAME NETWORK DIRECTION PRIORITY ALLOW DENY kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp -``` +~~~ ### Kubernetes Public IP Address Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: -``` +~~~sh gcloud compute addresses create kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) -``` +~~~ Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region: -``` +~~~sh gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')" -``` +~~~ > output -``` +~~~ NAME REGION ADDRESS STATUS kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED -``` +~~~ ## Compute Instances @@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http Create three compute instances which will host the Kubernetes control plane: -``` +~~~sh for i in 0 1 2; do gcloud compute instances create controller-${i} \ --async \ @@ -112,7 +112,7 @@ for i in 0 1 2; do --subnet kubernetes \ --tags kubernetes-the-hard-way,controller done -``` +~~~ ### Kubernetes Workers @@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste Create three compute instances which will host the Kubernetes worker nodes: -``` +~~~sh for i in 0 1 2; do gcloud compute instances create worker-${i} \ --async \ @@ -137,19 +137,19 @@ for i in 0 1 2; do --subnet kubernetes \ --tags kubernetes-the-hard-way,worker done -``` +~~~ ### Verification List the compute instances in your default compute zone: -``` +~~~sh gcloud compute instances list -``` +~~~ > output -``` +~~~ NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING @@ -157,7 +157,7 @@ controller-2 us-west1-c n1-standard-1 10.240.0.12 XX.XXX.XXX.XX worker-0 us-west1-c n1-standard-1 10.240.0.20 XXX.XXX.XXX.XX RUNNING worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX RUNNING worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING -``` +~~~ ## Configuring SSH Access @@ -165,13 +165,13 @@ SSH will be used to configure the controller and worker instances. When connecti Test SSH access to the `controller-0` compute instances: -``` +~~~sh gcloud compute ssh controller-0 -``` +~~~ If this is your first time connecting to a compute instance SSH keys will be generated for you. Enter a passphrase at the prompt to continue: -``` +~~~ WARNING: The public SSH key file for gcloud does not exist. WARNING: The private SSH key file for gcloud does not exist. WARNING: You do not have an SSH key for gcloud. @@ -179,11 +179,11 @@ WARNING: SSH keygen will be executed to generate a key. Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: -``` +~~~ At this point the generated SSH keys will be uploaded and stored in your project: -``` +~~~ Your identification has been saved in /home/$USER/.ssh/google_compute_engine. Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub. 
The key fingerprint is: @@ -203,28 +203,29 @@ The key's randomart image is: Updating project ssh metadata...-Updated [https://www.googleapis.com/compute/v1/projects/$PROJECT_ID]. Updating project ssh metadata...done. Waiting for SSH key to propagate. -``` +~~~ After the SSH keys have been updated you'll be logged into the `controller-0` instance: -``` +~~~ Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-1006-gcp x86_64) ... Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX -``` +~~~ Type `exit` at the prompt to exit the `controller-0` compute instance: -``` +~~~sh $USER@controller-0:~$ exit -``` +~~~ + > output -``` +~~~ logout Connection to XX.XXX.XXX.XXX closed -``` +~~~ Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md) diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index f8842d9..2671202 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -8,9 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g Generate the CA configuration file, certificate, and private key: -``` -{ - +~~~sh cat > ca-config.json < ca-csr.json < admin-csr.json < ${instance}-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < service-account-csr.json < The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab. diff --git a/docs/05-kubernetes-configuration-files.md b/docs/05-kubernetes-configuration-files.md index e8ddf9d..db058c2 100644 --- a/docs/05-kubernetes-configuration-files.md +++ b/docs/05-kubernetes-configuration-files.md @@ -12,11 +12,11 @@ Each kubeconfig requires a Kubernetes API Server to connect to. 
To support high Retrieve the `kubernetes-the-hard-way` static IP address: -``` +~~~sh KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') -``` +~~~ ### The kubelet Kubernetes Configuration File @@ -24,7 +24,7 @@ When generating kubeconfig files for Kubelets the client certificate matching th Generate a kubeconfig file for each worker node: -``` +~~~sh for instance in worker-0 worker-1 worker-2; do kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ @@ -45,22 +45,21 @@ for instance in worker-0 worker-1 worker-2; do kubectl config use-context default --kubeconfig=${instance}.kubeconfig done -``` +~~~ Results: -``` +~~~ worker-0.kubeconfig worker-1.kubeconfig worker-2.kubeconfig -``` +~~~ ### The kube-proxy Kubernetes Configuration File Generate a kubeconfig file for the `kube-proxy` service: -``` -{ +~~~sh kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ @@ -79,21 +78,19 @@ Generate a kubeconfig file for the `kube-proxy` service: --kubeconfig=kube-proxy.kubeconfig kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig -} -``` +~~~ Results: -``` +~~~ kube-proxy.kubeconfig -``` +~~~ ### The kube-controller-manager Kubernetes Configuration File Generate a kubeconfig file for the `kube-controller-manager` service: -``` -{ +~~~sh kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ @@ -112,22 +109,21 @@ Generate a kubeconfig file for the `kube-controller-manager` service: --kubeconfig=kube-controller-manager.kubeconfig kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig -} -``` +~~~ Results: -``` +~~~sh kube-controller-manager.kubeconfig -``` +~~~ ### The kube-scheduler Kubernetes Configuration File Generate a kubeconfig file for the `kube-scheduler` service: -``` -{ +~~~sh + kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ @@ -146,21 +142,19 @@ Generate a kubeconfig file for the `kube-scheduler` service: --kubeconfig=kube-scheduler.kubeconfig kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig -} -``` +~~~ Results: -``` +~~~sh kube-scheduler.kubeconfig -``` +~~~ ### The admin Kubernetes Configuration File Generate a kubeconfig file for the `admin` user: -``` -{ +~~~sh kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ @@ -179,14 +173,13 @@ Generate a kubeconfig file for the `admin` user: --kubeconfig=admin.kubeconfig kubectl config use-context default --kubeconfig=admin.kubeconfig -} -``` +~~~ Results: -``` +~~~ admin.kubeconfig -``` +~~~ ## @@ -195,18 +188,18 @@ admin.kubeconfig Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance: -``` +~~~sh for instance in worker-0 worker-1 worker-2; do gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/ done -``` +~~~ Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance: -``` +~~~sh for instance in controller-0 controller-1 controller-2; do gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/ done -``` +~~~ Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md) diff --git a/docs/06-data-encryption-keys.md 
b/docs/06-data-encryption-keys.md index 233bce2..5b1288f 100644 --- a/docs/06-data-encryption-keys.md +++ b/docs/06-data-encryption-keys.md @@ -8,15 +8,15 @@ In this lab you will generate an encryption key and an [encryption config](https Generate an encryption key: -``` +~~~sh ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64) -``` +~~~ ## The Encryption Config File Create the `encryption-config.yaml` encryption config file: -``` +~~~sh cat > encryption-config.yaml < Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. @@ -107,20 +101,20 @@ EOF List the etcd cluster members: -``` +~~~sh sudo ETCDCTL_API=3 etcdctl member list \ --endpoints=https://127.0.0.1:2379 \ --cacert=/etc/etcd/ca.pem \ --cert=/etc/etcd/kubernetes.pem \ --key=/etc/etcd/kubernetes-key.pem -``` +~~~ > output -``` +~~~ 3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379 f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379 ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379 -``` +~~~ Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md) diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md index 1c2883b..eb70e8c 100644 --- a/docs/08-bootstrapping-kubernetes-controllers.md +++ b/docs/08-bootstrapping-kubernetes-controllers.md @@ -6,9 +6,9 @@ In this lab you will bootstrap the Kubernetes control plane across three compute The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example: -``` +~~~sh gcloud compute ssh controller-0 -``` +~~~ ### Running commands in parallel with tmux @@ -18,53 +18,49 @@ gcloud compute ssh controller-0 Create the Kubernetes configuration directory: -``` +~~~sh sudo mkdir -p /etc/kubernetes/config -``` +~~~ ### Download and Install the Kubernetes Controller Binaries Download the official Kubernetes release binaries: -``` +~~~sh wget -q --show-progress --https-only --timestamping \ "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \ "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \ "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \ "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl" -``` +~~~ Install the Kubernetes binaries: -``` -{ +~~~sh chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ -} -``` +~~~ ### Configure the Kubernetes API Server -``` -{ +~~~sh sudo mkdir -p /var/lib/kubernetes/ sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ service-account-key.pem service-account.pem \ encryption-config.yaml /var/lib/kubernetes/ -} -``` +~~~ The instance internal IP address will be used to advertise the API Server to members of the cluster. 
Retrieve the internal IP address for the current compute instance: -``` +~~~sh INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) -``` +~~~ Create the `kube-apiserver.service` systemd unit file: -``` +~~~ cat < Allow up to 10 seconds for the Kubernetes API Server to fully initialize. @@ -208,11 +202,11 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala Install a basic web server to handle HTTP health checks: -``` +~~~sh sudo apt-get install -y nginx -``` +~~~ -``` +~~~sh cat > kubernetes.default.svc.cluster.local < Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. @@ -283,13 +276,13 @@ In this section you will configure RBAC permissions to allow the Kubernetes API > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization. -``` +~~~sh gcloud compute ssh controller-0 -``` +~~~ Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods: -``` +~~~sh cat < output -``` +~~~sh { "major": "1", "minor": "12", @@ -406,6 +397,6 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version "compiler": "gc", "platform": "linux/amd64" } -``` +~~~ Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md) diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md index bec4960..858d571 100644 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ b/docs/09-bootstrapping-kubernetes-workers.md @@ -6,9 +6,9 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example: -``` +~~~sh gcloud compute ssh worker-0 -``` +~~~ ### Running commands in parallel with tmux @@ -18,18 +18,16 @@ gcloud compute ssh worker-0 Install the OS dependencies: -``` -{ +~~~sh sudo apt-get update sudo apt-get -y install socat conntrack ipset -} -``` +~~~ > The socat binary enables support for the `kubectl port-forward` command. 
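If you want to confirm the three packages installed cleanly before continuing (a minimal sketch; `dpkg -s` exits non-zero for a missing package):

~~~sh
# Each package should report "Status: install ok installed".
for pkg in socat conntrack ipset; do
  dpkg -s ${pkg} | grep ^Status
done
~~~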
### Download and Install Worker Binaries -``` +~~~sh wget -q --show-progress --https-only --timestamping \ https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \ https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \ @@ -39,11 +37,11 @@ wget -q --show-progress --https-only --timestamping \ https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \ https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \ https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet -``` +~~~ Create the installation directories: -``` +~~~sh sudo mkdir -p \ /etc/cni/net.d \ /opt/cni/bin \ @@ -51,12 +49,11 @@ sudo mkdir -p \ /var/lib/kube-proxy \ /var/lib/kubernetes \ /var/run/kubernetes -``` +~~~ Install the worker binaries: -``` -{ +~~~sh sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc sudo mv runc.amd64 runc chmod +x kubectl kube-proxy kubelet runc runsc @@ -64,21 +61,20 @@ Install the worker binaries: sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/ sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/ sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C / -} -``` +~~~ ### Configure CNI Networking Retrieve the Pod CIDR range for the current compute instance: -``` +~~~sh POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) -``` +~~~ Create the `bridge` network configuration file: -``` +~~~sh cat < Untrusted workloads will be run using the gVisor (runsc) runtime. Create the `containerd.service` systemd unit file: -``` +~~~sh cat < The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`. Create the `kubelet.service` systemd unit file: -``` +~~~sh cat < Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`. @@ -287,18 +279,18 @@ EOF List the registered Kubernetes nodes: -``` +~~~sh gcloud compute ssh controller-0 \ --command "kubectl get nodes --kubeconfig admin.kubeconfig" -``` +~~~ > output -``` +~~~ NAME STATUS ROLES AGE VERSION worker-0 Ready 35s v1.12.0 worker-1 Ready 36s v1.12.0 worker-2 Ready 36s v1.12.0 -``` +~~~ Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md) diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md index 8ac0026..01fb381 100644 --- a/docs/10-configuring-kubectl.md +++ b/docs/10-configuring-kubectl.md @@ -10,8 +10,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. 
To support high Generate a kubeconfig file suitable for authenticating as the `admin` user: -``` -{ +~~~sh KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') @@ -30,41 +29,40 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user: --user=admin kubectl config use-context kubernetes-the-hard-way -} -``` +~~~ ## Verification Check the health of the remote Kubernetes cluster: -``` +~~~sh kubectl get componentstatuses -``` +~~~ > output -``` +~~~ NAME STATUS MESSAGE ERROR controller-manager Healthy ok scheduler Healthy ok etcd-1 Healthy {"health":"true"} etcd-2 Healthy {"health":"true"} etcd-0 Healthy {"health":"true"} -``` +~~~ List the nodes in the remote Kubernetes cluster: -``` +~~~sh kubectl get nodes -``` +~~~ > output -``` +~~~ NAME STATUS ROLES AGE VERSION worker-0 Ready 117s v1.12.0 worker-1 Ready 118s v1.12.0 worker-2 Ready 118s v1.12.0 -``` +~~~ Next: [Provisioning Pod Network Routes](11-pod-network-routes.md) diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md index c9f0b6a..dd01181 100644 --- a/docs/11-pod-network-routes.md +++ b/docs/11-pod-network-routes.md @@ -12,49 +12,49 @@ In this section you will gather the information required to create routes in the Print the internal IP address and Pod CIDR range for each worker instance: -``` +~~~sh for instance in worker-0 worker-1 worker-2; do gcloud compute instances describe ${instance} \ --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' done -``` +~~~ > output -``` +~~~ 10.240.0.20 10.200.0.0/24 10.240.0.21 10.200.1.0/24 10.240.0.22 10.200.2.0/24 -``` +~~~ ## Routes Create network routes for each worker instance: -``` +~~~sh for i in 0 1 2; do gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ --network kubernetes-the-hard-way \ --next-hop-address 10.240.0.2${i} \ --destination-range 10.200.${i}.0/24 done -``` +~~~ List the routes in the `kubernetes-the-hard-way` VPC network: -``` +~~~sh gcloud compute routes list --filter "network: kubernetes-the-hard-way" -``` +~~~ > output -``` +~~~ NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY default-route-081879136902de56 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000 default-route-55199a5aa126d7aa kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000 kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000 kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000 -``` +~~~ Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md) diff --git a/docs/12-dns-addon.md b/docs/12-dns-addon.md index 67c5e5b..6633903 100644 --- a/docs/12-dns-addon.md +++ b/docs/12-dns-addon.md @@ -6,76 +6,76 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts Deploy the `coredns` cluster add-on: -``` +~~~sh kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml -``` +~~~ > output -``` +~~~ serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created configmap/coredns created deployment.extensions/coredns created service/kube-dns created -``` +~~~ List the pods created by the `kube-dns` deployment: -``` +~~~sh kubectl get pods -l k8s-app=kube-dns -n kube-system -``` +~~~ > output -``` 
+~~~ NAME READY STATUS RESTARTS AGE coredns-699f8ddd77-94qv9 1/1 Running 0 20s coredns-699f8ddd77-gtcgb 1/1 Running 0 20s -``` +~~~ ## Verification Create a `busybox` deployment: -``` +~~~sh kubectl run busybox --image=busybox:1.28 --command -- sleep 3600 -``` +~~~ List the pod created by the `busybox` deployment: -``` +~~~sh kubectl get pods -l run=busybox -``` +~~~ > output -``` +~~~ NAME READY STATUS RESTARTS AGE busybox-bd8fb7cbd-vflm9 1/1 Running 0 10s -``` +~~~ Retrieve the full name of the `busybox` pod: -``` +~~~sh POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") -``` +~~~ Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod: -``` +~~~sh kubectl exec -ti $POD_NAME -- nslookup kubernetes -``` +~~~ > output -``` +~~~ Server: 10.32.0.10 Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local Name: kubernetes Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local -``` +~~~ Next: [Smoke Test](13-smoke-test.md) diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md index f302909..a47f3c8 100644 --- a/docs/13-smoke-test.md +++ b/docs/13-smoke-test.md @@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt Create a generic secret: -``` +~~~sh kubectl create secret generic kubernetes-the-hard-way \ --from-literal="mykey=mydata" -``` +~~~ Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: -``` +~~~sh gcloud compute ssh controller-0 \ --command "sudo ETCDCTL_API=3 etcdctl get \ --endpoints=https://127.0.0.1:2379 \ @@ -23,11 +23,11 @@ gcloud compute ssh controller-0 \ --cert=/etc/etcd/kubernetes.pem \ --key=/etc/etcd/kubernetes-key.pem\ /registry/secrets/default/kubernetes-the-hard-way | hexdump -C" -``` +~~~ > output -``` +~~~ 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret| 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern| 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa| @@ -44,7 +44,7 @@ gcloud compute ssh controller-0 \ 000000d0 18 28 f4 33 42 d9 57 d9 e3 e9 1c 38 e3 bc 1e c3 |.(.3B.W....8....| 000000e0 d2 47 f3 20 60 be b8 57 a7 0a |.G. `..W..| 000000ea -``` +~~~ The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key. 
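Rather than scanning the hexdump by eye, you can test for the provider prefix directly against the raw value (a minimal sketch reusing the same `etcdctl` flags as above; it prints `encrypted` when the `aescbc` prefix is present):

~~~sh
# grep -a searches the raw (binary) secret value for the aescbc provider prefix.
gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | \
  grep -aq 'k8s:enc:aescbc:v1:key1' && echo encrypted || echo 'not encrypted'"
~~~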
@@ -54,22 +54,22 @@ In this section you will verify the ability to create and manage [Deployments](h Create a deployment for the [nginx](https://nginx.org/en/) web server: -``` +~~~sh kubectl run nginx --image=nginx -``` +~~~ List the pod created by the `nginx` deployment: -``` +~~~sh kubectl get pods -l run=nginx -``` +~~~ > output -``` +~~~ NAME READY STATUS RESTARTS AGE nginx-dbddb74b8-6lxg2 1/1 Running 0 10s -``` +~~~ ### Port Forwarding @@ -77,32 +77,32 @@ In this section you will verify the ability to access applications remotely usin Retrieve the full name of the `nginx` pod: -``` +~~~sh POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}") -``` +~~~ Forward port `8080` on your local machine to port `80` of the `nginx` pod: -``` +~~~sh kubectl port-forward $POD_NAME 8080:80 -``` +~~~ > output -``` +~~~ Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 -``` +~~~ In a new terminal make an HTTP request using the forwarding address: -``` +~~~sh curl --head http://127.0.0.1:8080 -``` +~~~ > output -``` +~~~ HTTP/1.1 200 OK Server: nginx/1.15.4 Date: Sun, 30 Sep 2018 19:23:10 GMT @@ -112,16 +112,16 @@ Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT Connection: keep-alive ETag: "5baa4e63-264" Accept-Ranges: bytes -``` +~~~ Switch back to the previous terminal and stop the port forwarding to the `nginx` pod: -``` +~~~sh Forwarding from 127.0.0.1:8080 -> 80 Forwarding from [::1]:8080 -> 80 Handling connection for 8080 ^C -``` +~~~ ### Logs @@ -129,15 +129,15 @@ In this section you will verify the ability to [retrieve container logs](https:/ Print the `nginx` pod logs: -``` +~~~sh kubectl logs $POD_NAME -``` +~~~ > output -``` +~~~ 127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-" -``` +~~~ ### Exec @@ -145,15 +145,15 @@ In this section you will verify the ability to [execute commands in a container] Print the nginx version by executing the `nginx -v` command in the `nginx` container: -``` +~~~sh kubectl exec -ti $POD_NAME -- nginx -v -``` +~~~ > output -``` +~~~ nginx version: nginx/1.15.4 -``` +~~~ ## Services @@ -161,43 +161,43 @@ In this section you will verify the ability to expose applications using a [Serv Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: -``` +~~~sh kubectl expose deployment nginx --port 80 --type NodePort -``` +~~~ > The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial. 
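Before wiring up the firewall rule, you can eyeball the service and the node port it was assigned (a minimal sketch):

~~~sh
# The PORT(S) column shows the node port chosen from the 30000-32767 range, e.g. 80:31234/TCP.
kubectl get svc nginx
~~~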
Retrieve the node port assigned to the `nginx` service: -``` +~~~sh NODE_PORT=$(kubectl get svc nginx \ --output=jsonpath='{range .spec.ports[0]}{.nodePort}') -``` +~~~ Create a firewall rule that allows remote access to the `nginx` node port: -``` +~~~sh gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ --allow=tcp:${NODE_PORT} \ --network kubernetes-the-hard-way -``` +~~~ Retrieve the external IP address of a worker instance: -``` +~~~sh EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') -``` +~~~ Make an HTTP request using the external IP address and the `nginx` node port: -``` +~~~sh curl -I http://${EXTERNAL_IP}:${NODE_PORT} -``` +~~~ > output -``` +~~~ HTTP/1.1 200 OK Server: nginx/1.15.4 Date: Sun, 30 Sep 2018 19:25:40 GMT @@ -207,7 +207,7 @@ Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT Connection: keep-alive ETag: "5baa4e63-264" Accept-Ranges: bytes -``` +~~~ ## Untrusted Workloads @@ -215,7 +215,7 @@ This section will verify the ability to run untrusted workloads using [gVisor](h Create the `untrusted` pod: -``` +~~~sh cat < output -``` +~~~ I0930 19:31:31.419765 21217 x:0] *************************** I0930 19:31:31.419907 21217 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5] I0930 19:31:31.419959 21217 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17 @@ -321,6 +321,6 @@ I0930 19:31:31.420676 21217 x:0] *************************** UID PID PPID C STIME TIME CMD 0 1 0 0 19:26 10ms app I0930 19:31:31.422022 21217 x:0] Exiting with status: 0 -``` +~~~ Next: [Cleaning Up](14-cleanup.md) diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index dc97a3a..bce3943 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -6,18 +6,17 @@ In this lab you will delete the compute resources created during this tutorial. Delete the controller and worker compute instances: -``` +~~~sh gcloud -q compute instances delete \ controller-0 controller-1 controller-2 \ worker-0 worker-1 worker-2 -``` +~~~ ## Networking Delete the external load balancer network resources: -``` -{ +~~~sh gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ --region $(gcloud config get-value compute/region) @@ -26,23 +25,21 @@ Delete the external load balancer network resources: gcloud -q compute http-health-checks delete kubernetes gcloud -q compute addresses delete kubernetes-the-hard-way -} -``` +~~~ Delete the `kubernetes-the-hard-way` firewall rules: -``` +~~~sh gcloud -q compute firewall-rules delete \ kubernetes-the-hard-way-allow-nginx-service \ kubernetes-the-hard-way-allow-internal \ kubernetes-the-hard-way-allow-external \ kubernetes-the-hard-way-allow-health-check -``` +~~~ Delete the `kubernetes-the-hard-way` network VPC: -``` -{ +~~~sh gcloud -q compute routes delete \ kubernetes-route-10-200-0-0-24 \ kubernetes-route-10-200-1-0-24 \ @@ -51,5 +48,4 @@ Delete the `kubernetes-the-hard-way` network VPC: gcloud -q compute networks subnets delete kubernetes gcloud -q compute networks delete kubernetes-the-hard-way -} -``` +~~~
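Once the deletions complete, a few list commands confirm nothing from the tutorial is left behind (a minimal sketch; in a project dedicated to this tutorial all three should come back empty):

~~~sh
# Any remaining controller-* or worker-* instances would indicate an incomplete cleanup.
gcloud compute instances list
gcloud compute networks list --filter="name=('kubernetes-the-hard-way')"
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
~~~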