diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md index 79ff65a..706b605 100644 --- a/docs/01-prerequisites.md +++ b/docs/01-prerequisites.md @@ -44,4 +44,14 @@ gcloud config set compute/zone us-west1-c > Use the `gcloud compute zones list` command to view additional regions and zones. +## Important note for Windows users + +The commands for Windows in this tutorial are intended to be run using PowerShell and will +not work as intended using cmd. If you are at all unsure about what shell you're using, +execute the following command: +``` +(dir 2>&1 *`|echo CMD);&<# rem #>echo PowerShell +``` +If it outputs `CMD`, execute `powershell.exe` before continuing. + Next: [Installing the Client Tools](02-client-tools.md) diff --git a/docs/02-client-tools.md b/docs/02-client-tools.md index e6b728d..317db85 100644 --- a/docs/02-client-tools.md +++ b/docs/02-client-tools.md @@ -44,6 +44,21 @@ sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson ``` +### Windows + +``` +Invoke-WebRequest -Uri https://pkg.cfssl.org/R1.2/cfssl_windows-amd64.exe -OutFile cfssl.exe +``` + +``` +Invoke-WebRequest -Uri https://pkg.cfssl.org/R1.2/cfssljson_windows-amd64.exe -OutFile cfssljson.exe +``` + +Add the current directory to the path (this will not persist between sessions): +``` +$env:Path += ";$(Get-Location)" +``` + ### Verification Verify `cfssl` version 1.2.0 or higher is installed: @@ -94,6 +109,12 @@ chmod +x kubectl sudo mv kubectl /usr/local/bin/ ``` +### Windows + +``` +Invoke-WebRequest -Uri https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/windows/amd64/kubectl.exe -OutFile kubectl.exe +``` + ### Verification Verify `kubectl` version 1.9.0 or higher is installed: diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md index ffe8ac3..6f0f8bb 100644 --- a/docs/03-compute-resources.md +++ b/docs/03-compute-resources.md @@ -24,18 +24,29 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: +#### Linux & OS X ``` gcloud compute networks subnets create kubernetes \ --network kubernetes-the-hard-way \ --range 10.240.0.0/24 ``` +#### Windows + +``` +gcloud compute networks subnets create kubernetes ` + --network kubernetes-the-hard-way ` + --range 10.240.0.0/24 +``` + > The `10.240.0.0/24` IP address range can host up to 254 compute instances.
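As an optional, illustrative check (not a tutorial step), the new subnet can be inspected with a single `gcloud` command that works the same in bash and PowerShell:

```
gcloud compute networks subnets describe kubernetes --region $(gcloud config get-value compute/region) --format 'value(ipCidrRange)'
```

This should print `10.240.0.0/24`.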
### Firewall Rules Create a firewall rule that allows internal communication across all protocols: +#### Linux & OS X + ``` gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --allow tcp,udp,icmp \ @@ -43,8 +54,19 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --source-ranges 10.240.0.0/24,10.200.0.0/16 ``` +#### Windows + +``` +gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal ` + --allow "tcp,udp,icmp" ` + --network kubernetes-the-hard-way ` + --source-ranges "10.240.0.0/24,10.200.0.0/16" +``` + +> The comma-separated values are quoted so that PowerShell passes each of them to `gcloud` as a single argument instead of splitting them into an array. + Create a firewall rule that allows external SSH, ICMP, and HTTPS: +#### Linux & OS X + ``` gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --allow tcp:22,tcp:6443,icmp \ @@ -52,6 +74,15 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --source-ranges 0.0.0.0/0 ``` +#### Windows + +``` +gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external ` + --allow "tcp:22,tcp:6443,icmp" ` + --network kubernetes-the-hard-way ` + --source-ranges 0.0.0.0/0 +``` + > An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients. List the firewall rules in the `kubernetes-the-hard-way` VPC network: @@ -72,11 +103,20 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: +#### Linux & OS X + ``` gcloud compute addresses create kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) ``` +#### Windows + +``` +gcloud compute addresses create kubernetes-the-hard-way ` + --region $(gcloud config get-value compute/region) +``` + Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region: ``` @@ -98,6 +138,8 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http Create three compute instances which will host the Kubernetes control plane: +#### Linux & OS X + ``` for i in 0 1 2; do gcloud compute instances create controller-${i} \ @@ -114,6 +156,24 @@ for i in 0 1 2; do done ``` +#### Windows + +``` +@(0,1,2) | ForEach-Object { + gcloud compute instances create controller-$_ ` + --async ` + --boot-disk-size 200GB ` + --can-ip-forward ` + --image-family ubuntu-1604-lts ` + --image-project ubuntu-os-cloud ` + --machine-type n1-standard-1 ` + --private-network-ip 10.240.0.1$_ ` + --scopes "compute-rw,storage-ro,service-management,service-control,logging-write,monitoring" ` + --subnet kubernetes ` + --tags "kubernetes-the-hard-way,controller" +} +``` + ### Kubernetes Workers Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
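To make the `pod-cidr` metadata concrete: once a worker instance exists, it can read its own allocation from the standard GCE metadata server. The lookup below is only an illustration of that mechanism (the workers themselves are created in the next step), not a required tutorial command:

```
gcloud compute ssh worker-0 --command "curl -s -H 'Metadata-Flavor: Google' http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr"
```

For `worker-0` this should return `10.200.0.0/24`.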
@@ -122,6 +182,8 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste Create three compute instances which will host the Kubernetes worker nodes: +#### Linux & OS X + ``` for i in 0 1 2; do gcloud compute instances create worker-${i} \ @@ -139,6 +201,25 @@ for i in 0 1 2; do done ``` +#### Windows + +``` +@(0,1,2) | ForEach-Object { + gcloud compute instances create worker-$_ ` + --async ` + --boot-disk-size 200GB ` + --can-ip-forward ` + --image-family ubuntu-1604-lts ` + --image-project ubuntu-os-cloud ` + --machine-type n1-standard-1 ` + --metadata pod-cidr=10.200.$_.0/24 ` + --private-network-ip 10.240.0.2$_ ` + --scopes "compute-rw,storage-ro,service-management,service-control,logging-write,monitoring" ` + --subnet kubernetes ` + --tags "kubernetes-the-hard-way,worker" +} +``` + ### Verification List the compute instances in your default compute zone: diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index 7229356..01ad29b 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -8,6 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g Create the CA configuration file: +#### Linux & OS X ``` cat > ca-config.json < ca-config.json < ca-csr.json < ca-csr.json < admin-csr.json < admin-csr.json < ${instance}-csr.json < kube-proxy-csr.json < kube-proxy-csr.json < kubernetes-csr.json < kubernetes-csr.json < The `kube-proxy` and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab. Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md) diff --git a/docs/05-kubernetes-configuration-files.md b/docs/05-kubernetes-configuration-files.md index 0b8974b..8d8f8eb 100644 --- a/docs/05-kubernetes-configuration-files.md +++ b/docs/05-kubernetes-configuration-files.md @@ -14,18 +14,27 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high Retrieve the `kubernetes-the-hard-way` static IP address: +#### Linux & OS X ``` KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') ``` +#### Windows +``` +$KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way ` + --region $(gcloud config get-value compute/region) ` + --format 'value(address)') +``` + ### The kubelet Kubernetes Configuration File When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
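Before generating the kubeconfigs below, it can help to confirm that each worker certificate's Common Name really is `system:node:<instance>`, since that is the identity the Node Authorizer checks. This is an optional sanity check using the `cfssl` binary installed earlier, not a tutorial step:

```
cfssl certinfo -cert worker-0.pem
```

The `subject.common_name` field in the JSON output should read `system:node:worker-0`.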
Generate a kubeconfig file for each worker node: +#### Linux & OS X ``` for instance in worker-0 worker-1 worker-2; do kubectl config set-cluster kubernetes-the-hard-way \ @@ -49,6 +58,30 @@ for instance in worker-0 worker-1 worker-2; do done ``` +#### Windows +``` +@('worker-0','worker-1','worker-2') | ForEach-Object { + kubectl config set-cluster kubernetes-the-hard-way ` + --certificate-authority=ca.pem ` + --embed-certs=true ` + --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 ` + --kubeconfig=$_.kubeconfig + + kubectl config set-credentials system:node:$_ ` + --client-certificate=$_.pem ` + --client-key=$_-key.pem ` + --embed-certs=true ` + --kubeconfig=$_.kubeconfig + + kubectl config set-context default ` + --cluster=kubernetes-the-hard-way ` + --user=system:node:$_ ` + --kubeconfig=$_.kubeconfig + + kubectl config use-context default --kubeconfig=$_.kubeconfig +} +``` + Results: ``` @@ -61,6 +94,7 @@ worker-2.kubeconfig Generate a kubeconfig file for the `kube-proxy` service: +#### Linux & OS X ``` kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \ --kubeconfig=kube-proxy.kubeconfig @@ -88,14 +122,50 @@ kubectl config set-context default \ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig ``` +#### Windows +``` +kubectl config set-cluster kubernetes-the-hard-way ` + --certificate-authority=ca.pem ` + --embed-certs=true ` + --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 ` + --kubeconfig=kube-proxy.kubeconfig +``` + +``` +kubectl config set-credentials kube-proxy ` + --client-certificate=kube-proxy.pem ` + --client-key=kube-proxy-key.pem ` + --embed-certs=true ` + --kubeconfig=kube-proxy.kubeconfig +``` + +``` +kubectl config set-context default ` + --cluster=kubernetes-the-hard-way ` + --user=kube-proxy ` + --kubeconfig=kube-proxy.kubeconfig +``` + +``` +kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig +``` + ## Distribute the Kubernetes Configuration Files Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance: +#### Linux & OS X ``` for instance in worker-0 worker-1 worker-2; do gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/ done ``` +#### Windows +``` +@('worker-0','worker-1','worker-2') | ForEach-Object { + gcloud compute scp "$_.kubeconfig" kube-proxy.kubeconfig ${_}:/home/$env:USERNAME/ +} +``` + Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md) diff --git a/docs/06-data-encryption-keys.md b/docs/06-data-encryption-keys.md index 233bce2..db2a040 100644 --- a/docs/06-data-encryption-keys.md +++ b/docs/06-data-encryption-keys.md @@ -8,14 +8,21 @@ In this lab you will generate an encryption key and an [encryption config](https Generate an encryption key: +#### Linux & OS X ``` ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64) ``` +#### Windows +``` +$ENCRYPTION_KEY=[System.Convert]::ToBase64String($(0..31 | ForEach-Object { Get-Random -Minimum 0 -Maximum 256 } )) +``` + +> `Get-Random` is not a cryptographically secure source of randomness; the key above is sufficient for this tutorial, but production keys should come from a cryptographic RNG. + ## The Encryption Config File Create the `encryption-config.yaml` encryption config file: +#### Linux & OS X ``` cat > encryption-config.yaml < output ``` diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md index 3d63825..e77d9d9 100644 --- a/docs/10-configuring-kubectl.md +++ b/docs/10-configuring-kubectl.md @@ -10,14 +10,23 @@ Each kubeconfig requires a Kubernetes API Server to connect to.
To support high Retrieve the `kubernetes-the-hard-way` static IP address: +#### Linux & OS X ``` KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') ``` +#### Windows +``` +$KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way ` + --region $(gcloud config get-value compute/region) ` + --format 'value(address)') +``` + Generate a kubeconfig file suitable for authenticating as the `admin` user: +#### Linux & OS X ``` kubectl config set-cluster kubernetes-the-hard-way \ --certificate-authority=ca.pem \ @@ -41,6 +50,30 @@ kubectl config set-context kubernetes-the-hard-way \ kubectl config use-context kubernetes-the-hard-way ``` +#### Windows +``` +kubectl config set-cluster kubernetes-the-hard-way ` + --certificate-authority=ca.pem ` + --embed-certs=true ` + --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 +``` + +``` +kubectl config set-credentials admin ` + --client-certificate=admin.pem ` + --client-key=admin-key.pem +``` + +``` +kubectl config set-context kubernetes-the-hard-way ` + --cluster=kubernetes-the-hard-way ` + --user=admin +``` + +``` +kubectl config use-context kubernetes-the-hard-way +``` + ## Verification Check the health of the remote Kubernetes cluster: diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md index f0d39be..747ff9d 100644 --- a/docs/11-pod-network-routes.md +++ b/docs/11-pod-network-routes.md @@ -12,13 +12,21 @@ In this section you will gather the information required to create routes in the Print the internal IP address and Pod CIDR range for each worker instance: +#### Linux & OS X ``` for instance in worker-0 worker-1 worker-2; do gcloud compute instances describe ${instance} \ - --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' + --format '(networkInterfaces[0].networkIP,metadata.items[0].value)' done ``` +#### Windows +``` +@('worker-0','worker-1','worker-2') | ForEach-Object { + gcloud compute instances describe $_ ` + --format "value[separator=' '](networkInterfaces[0].networkIP,metadata.items[0].value)" +} +``` > output ``` @@ -31,6 +39,7 @@ done Create network routes for each worker instance: +#### Linux & OS X ``` for i in 0 1 2; do gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ @@ -40,6 +49,16 @@ for i in 0 1 2; do done ``` +#### Windows +``` +@(0, 1, 2) | ForEach-Object { + gcloud compute routes create kubernetes-route-10-200-${_}-0-24 ` + --network kubernetes-the-hard-way ` + --next-hop-address 10.240.0.2${_} ` + --destination-range 10.200.${_}.0/24 +} +``` + List the routes in the `kubernetes-the-hard-way` VPC network: ``` diff --git a/docs/12-dns-addon.md b/docs/12-dns-addon.md index b7ad32a..59285e5 100644 --- a/docs/12-dns-addon.md +++ b/docs/12-dns-addon.md @@ -56,10 +56,16 @@ busybox-2125412808-mt2vb 1/1 Running 0 15s Retrieve the full name of the `busybox` pod: +#### Linux & OS X ``` POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") ``` +#### Windows +``` +$POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") +``` + Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod: ``` diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md index 7e91805..18e9269 100644 --- a/docs/13-smoke-test.md +++ b/docs/13-smoke-test.md @@ -8,18 +8,32 @@ In this section you will verify the ability to [encrypt secret data at rest](htt Create a generic 
secret: +#### Linux & OS X ``` kubectl create secret generic kubernetes-the-hard-way \ --from-literal="mykey=mydata" ``` +#### Windows +``` +kubectl create secret generic kubernetes-the-hard-way ` + --from-literal="mykey=mydata" +``` + Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: +#### Linux & OS X ``` gcloud compute ssh controller-0 \ --command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C" ``` +#### Windows +``` +gcloud compute ssh controller-0 ` + --command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way" | Format-Hex +``` + > output ``` @@ -72,10 +86,16 @@ In this section you will verify the ability to access applications remotely usin Retrieve the full name of the `nginx` pod: +#### Linux & OS X ``` POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}") ``` +#### Windows +``` +$POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}") +``` + Forward port `8080` on your local machine to port `80` of the `nginx` pod: ``` @@ -91,10 +111,16 @@ Forwarding from [::1]:8080 -> 80 In a new terminal make an HTTP request using the forwarding address: +#### Linux & OS X ``` curl --head http://127.0.0.1:8080 ``` +#### Windows +``` +(Invoke-WebRequest -Method HEAD http://127.0.0.1:8080).RawContent +``` + > output ``` @@ -164,32 +190,61 @@ kubectl expose deployment nginx --port 80 --type NodePort Retrieve the node port assigned to the `nginx` service: +#### Linux & OS X ``` NODE_PORT=$(kubectl get svc nginx \ --output=jsonpath='{range .spec.ports[0]}{.nodePort}') ``` +#### Windows +``` +$NODE_PORT=$(kubectl get svc nginx ` + --output=jsonpath='{range .spec.ports[0]}{.nodePort}') +``` + Create a firewall rule that allows remote access to the `nginx` node port: +#### Linux & OS X ``` gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ --allow=tcp:${NODE_PORT} \ --network kubernetes-the-hard-way ``` +#### Windows +``` +gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service ` + --allow=tcp:${NODE_PORT} ` + --network kubernetes-the-hard-way +``` + Retrieve the external IP address of a worker instance: +#### Linux & OS X ``` EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') ``` +#### Windows +``` +$EXTERNAL_IP=$(gcloud compute instances describe worker-0 ` + --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') +``` + + Make an HTTP request using the external IP address and the `nginx` node port: +#### Linux & OS X ``` curl -I http://${EXTERNAL_IP}:${NODE_PORT} ``` +#### Windows +``` +(Invoke-WebRequest -Method HEAD http://${EXTERNAL_IP}:${NODE_PORT}).RawContent +``` + > output ``` diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index d9084c8..690437a 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -6,16 +6,25 @@ In this lab you will delete the compute resources created during this tutorial.
Delete the controller and worker compute instances: +#### Linux & OS X ``` gcloud -q compute instances delete \ controller-0 controller-1 controller-2 \ worker-0 worker-1 worker-2 ``` +#### Windows +``` +gcloud -q compute instances delete ` + controller-0 controller-1 controller-2 ` + worker-0 worker-1 worker-2 +``` + ## Networking Delete the external load balancer network resources: +#### Linux & OS X ``` gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ --region $(gcloud config get-value compute/region) @@ -25,6 +34,16 @@ gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ gcloud -q compute target-pools delete kubernetes-target-pool ``` +#### Windows +``` +gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule ` + --region $(gcloud config get-value compute/region) +``` + +``` +gcloud -q compute target-pools delete kubernetes-target-pool +``` + Delete the `kubernetes-the-hard-way` static IP address: ``` @@ -33,6 +52,7 @@ gcloud -q compute addresses delete kubernetes-the-hard-way Delete the `kubernetes-the-hard-way` firewall rules: +#### Linux & OS X ``` gcloud -q compute firewall-rules delete \ kubernetes-the-hard-way-allow-nginx-service \ @@ -40,8 +60,17 @@ gcloud -q compute firewall-rules delete \ kubernetes-the-hard-way-allow-external ``` +#### Windows +``` +gcloud -q compute firewall-rules delete ` + kubernetes-the-hard-way-allow-nginx-service ` + kubernetes-the-hard-way-allow-internal ` + kubernetes-the-hard-way-allow-external +``` + Delete the Pod network routes: +#### Linux & OS X ``` gcloud -q compute routes delete \ kubernetes-route-10-200-0-0-24 \ @@ -49,6 +78,14 @@ gcloud -q compute routes delete \ kubernetes-route-10-200-2-0-24 ``` +#### Windows +``` +gcloud -q compute routes delete ` + kubernetes-route-10-200-0-0-24 ` + kubernetes-route-10-200-1-0-24 ` + kubernetes-route-10-200-2-0-24 +``` + Delete the `kubernetes` subnet: ``` @@ -60,3 +97,15 @@ Delete the `kubernetes-the-hard-way` network VPC: ``` gcloud -q compute networks delete kubernetes-the-hard-way ``` + +## CA Certificate + +#### Windows + +Remove the CA certificate from the Root Certificates keystore: + +``` +Get-ChildItem -Path Cert:\CurrentUser\Root\ | Where-Object { + $_.Thumbprint -eq (Get-PfxCertificate .\ca.pem).Thumbprint } | Remove-Item +``` +Confirm the certificate details in the confirmation dialog box, and click Yes to continue.
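As an optional final check on Windows, re-running the same filter should now return nothing, confirming the CA certificate is gone from the store (this assumes `ca.pem` is still present in the working directory):

```
Get-ChildItem -Path Cert:\CurrentUser\Root\ | Where-Object {
    $_.Thumbprint -eq (Get-PfxCertificate .\ca.pem).Thumbprint }
```

An empty result means the removal succeeded.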