diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md
index a30c520..524a8a1 100644
--- a/docs/03-compute-resources.md
+++ b/docs/03-compute-resources.md
@@ -32,6 +32,25 @@ gcloud compute networks subnets create kubernetes \
 
 > The `10.240.0.0/24` IP address range can host up to 254 compute instances.
 
+### Cloud NAT
+
+In this tutorial we will be setting up Kubernetes with private nodes, that is, nodes without public IP addresses. The nodes still need a way to reach the internet, for example to download container images when we deploy an application. That is the job of a NAT gateway; we will use [Google Cloud NAT](https://cloud.google.com/nat/docs/overview), a fully managed NAT gateway.
+
+Create a Google Cloud Router:
+
+```
+gcloud compute routers create kube-nat-router --network kubernetes-the-hard-way
+```
+
+Create a Google Cloud NAT Gateway:
+
+```
+gcloud compute routers nats create kube-nat-gateway \
+  --router=kube-nat-router \
+  --auto-allocate-nat-external-ips \
+  --nat-all-subnet-ip-ranges
+```
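+
+As an optional check, confirm that the NAT configuration was attached to the router before moving on. This command is read-only and relies on the default `compute/region` configured earlier:
+
+```
+gcloud compute routers nats describe kube-nat-gateway \
+  --router=kube-nat-router
+```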
+
 ### Firewall Rules
 
 Create a firewall rule that allows internal communication across all protocols:
@@ -43,15 +62,25 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
   --source-ranges 10.240.0.0/24,10.200.0.0/16
 ```
 
-Create a firewall rule that allows external SSH, ICMP, and HTTPS:
+Create a firewall rule that allows external ICMP and HTTPS:
 
 ```
 gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
-  --allow tcp:22,tcp:6443,icmp \
+  --allow tcp:6443,icmp \
   --network kubernetes-the-hard-way \
   --source-ranges 0.0.0.0/0
 ```
 
+Create a firewall rule that allows traffic from the IAP ([Identity-Aware Proxy](https://cloud.google.com/iap/docs/concepts-overview)) netblock. This is required for configuring SSH with [IAP TCP forwarding](https://cloud.google.com/iap/docs/using-tcp-forwarding) later:
+
+```
+gcloud compute firewall-rules create kubernetes-the-hard-way-allow-iap \
+  --allow tcp \
+  --network kubernetes-the-hard-way \
+  --source-ranges 35.235.240.0/20
+```
+
 > An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
 
 List the firewall rules in the `kubernetes-the-hard-way` VPC network:
@@ -110,7 +139,8 @@ for i in 0 1 2; do
     --private-network-ip 10.240.0.1${i} \
     --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
     --subnet kubernetes \
-    --tags kubernetes-the-hard-way,controller
+    --tags kubernetes-the-hard-way,controller \
+    --no-address
 done
 ```
@@ -135,7 +165,8 @@ for i in 0 1 2; do
     --private-network-ip 10.240.0.2${i} \
     --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
     --subnet kubernetes \
-    --tags kubernetes-the-hard-way,worker
+    --tags kubernetes-the-hard-way,worker \
+    --no-address
 done
 ```
@@ -161,13 +192,30 @@ worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX
 
 ## Configuring SSH Access
 
-SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
+SSH will be used to configure the controller and worker instances. Because our nodes are private we cannot SSH into them directly from the internet; instead we will use [Identity-Aware Proxy](https://cloud.google.com/iap/docs/concepts-overview) with a feature called [TCP forwarding](https://cloud.google.com/iap/docs/using-tcp-forwarding).
+
+Grant your current user the `roles/iap.tunnelResourceAccessor` IAM role:
+
+```
+{
+  export GCP_USER=$(gcloud auth list --format="value(account)")
+
+  export PROJECT=$(gcloud config list --format="value(core.project)")
+
+  gcloud projects add-iam-policy-binding $PROJECT \
+    --member=user:$GCP_USER \
+    --role=roles/iap.tunnelResourceAccessor
+}
+```
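+
+IAM changes can take a minute or two to propagate. As an optional check, you can open a raw IAP tunnel to the SSH port of `controller-0`; this uses the same mechanism that `gcloud compute ssh` relies on below, and any free local port can stand in for `2222`:
+
+```
+gcloud compute start-iap-tunnel controller-0 22 \
+  --local-host-port=localhost:2222
+```
+
+> The command blocks while the tunnel is open; press `Ctrl+C` to close it once it reports that it is listening.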
+
+When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
 
 Test SSH access to the `controller-0` compute instances:
 
 ```
-gcloud compute ssh controller-0
+gcloud compute ssh --tunnel-through-iap controller-0
 ```
 
+> The `--tunnel-through-iap` flag tells `gcloud` to skip trying to connect via a public IP and to use IAP directly.
+
 If this is your first time connecting to a compute instance SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md
index 1510993..03402b2 100644
--- a/docs/04-certificate-authority.md
+++ b/docs/04-certificate-authority.md
@@ -132,9 +132,6 @@ cat > ${instance}-csr.json <<EOF
 EOF
 
-EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
-  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
-
 INTERNAL_IP=$(gcloud compute instances describe ${instance} \
   --format 'value(networkInterfaces[0].networkIP)')
 
@@ -143,7 +140,7 @@ cfssl gencert \
   -ca=ca.pem \
   -ca-key=ca-key.pem \
   -config=ca-config.json \
-  -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
+  -hostname=${instance},${INTERNAL_IP} \
   -profile=kubernetes \
   ${instance}-csr.json | cfssljson -bare ${instance}
diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md
--- a/docs/08-bootstrapping-kubernetes-controllers.md
+++ b/docs/08-bootstrapping-kubernetes-controllers.md
@@ -287,7 +285,7 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
 > Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
 
 ## RBAC for Kubelet Authorization
 
 In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
 
 The commands in this section will effect the entire cluster and only need to be run once from one of the controller nodes.
 
 ```
-gcloud compute ssh controller-0
+gcloud compute ssh --tunnel-through-iap controller-0
 ```
 
 Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md
index 6dd752d..bcc3769 100644
--- a/docs/09-bootstrapping-kubernetes-workers.md
+++ b/docs/09-bootstrapping-kubernetes-workers.md
@@ -7,7 +7,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
 The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example:
 
 ```
-gcloud compute ssh worker-0
+gcloud compute ssh --tunnel-through-iap worker-0
 ```
 
 ### Running commands in parallel with tmux
@@ -297,7 +297,7 @@ EOF
 List the registered Kubernetes nodes:
 
 ```
-gcloud compute ssh controller-0 \
+gcloud compute ssh --tunnel-through-iap controller-0 \
   --command "kubectl get nodes --kubeconfig admin.kubeconfig"
 ```
diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md
index ed90844..91bff85 100644
--- a/docs/13-smoke-test.md
+++ b/docs/13-smoke-test.md
@@ -16,7 +16,7 @@ kubectl create secret generic kubernetes-the-hard-way \
 Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
 
 ```
-gcloud compute ssh controller-0 \
+gcloud compute ssh --tunnel-through-iap controller-0 \
   --command "sudo ETCDCTL_API=3 etcdctl get \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/etcd/ca.pem \
@@ -148,7 +148,7 @@ Print the nginx version by executing the `nginx -v` command in the `nginx` conta
 kubectl exec -ti $POD_NAME -- nginx -v
 ```
 
-> output
+> output (your output might vary depending on the nginx version)
 
 ```
 nginx version: nginx/1.17.3
@@ -166,32 +166,47 @@ kubectl expose deployment nginx --port 80 --type NodePort
 
 > The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
 
-Retrieve the node port assigned to the `nginx` service:
-
+We will set up a TCP load balancer with the worker nodes as the target pool, forwarding to the node port allocated by the service of type `NodePort`. This exposes the nginx deployment to the internet (we have to use a load balancer because the nodes do not have public IPs).
 ```
-NODE_PORT=$(kubectl get svc nginx \
+{
+  gcloud compute addresses create nginx-service \
+    --region $(gcloud config get-value compute/region)
+
+  NGINX_SERVICE_PUBLIC_ADDRESS=$(gcloud compute addresses describe nginx-service \
+    --region $(gcloud config get-value compute/region) \
+    --format 'value(address)')
+
+  NODE_PORT=$(kubectl get svc nginx \
     --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
+
+  gcloud compute firewall-rules create nginx-service \
+    --network kubernetes-the-hard-way \
+    --allow tcp:${NODE_PORT}
+
+  gcloud compute http-health-checks create nginx-service \
+    --description "Nginx Health Check" \
+    --host "nginx.default.svc.cluster.local" \
+    --port ${NODE_PORT}
+
+  gcloud compute target-pools create nginx-service \
+    --http-health-check nginx-service
+
+  gcloud compute target-pools add-instances nginx-service \
+    --instances worker-0,worker-1,worker-2
+
+  gcloud compute forwarding-rules create nginx-service \
+    --address ${NGINX_SERVICE_PUBLIC_ADDRESS} \
+    --ports ${NODE_PORT} \
+    --region $(gcloud config get-value compute/region) \
+    --target-pool nginx-service
+}
 ```
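+
+The forwarding rule can take a minute or two to start passing traffic. As an optional check, confirm that the worker instances pass the load balancer health check before testing:
+
+```
+gcloud compute target-pools get-health nginx-service
+```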
 
-Create a firewall rule that allows remote access to the `nginx` node port:
+Make an HTTP request using the load balancer's external IP address and the `nginx` node port:
 
 ```
-gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
-  --allow=tcp:${NODE_PORT} \
-  --network kubernetes-the-hard-way
-```
-
-Retrieve the external IP address of a worker instance:
-
-```
-EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
-  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
-```
-
-Make an HTTP request using the external IP address and the `nginx` node port:
-
-```
-curl -I http://${EXTERNAL_IP}:${NODE_PORT}
+curl -I http://${NGINX_SERVICE_PUBLIC_ADDRESS}:${NODE_PORT}
 ```
 
 > output
diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md
index 07be407..fdf1278 100644
--- a/docs/14-cleanup.md
+++ b/docs/14-cleanup.md
@@ -16,7 +16,6 @@ gcloud -q compute instances delete \
 ## Networking
 
 Delete the external load balancer network resources:
-
 ```
 {
   gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
     --region $(gcloud config get-value compute/region)
@@ -30,11 +29,27 @@ Delete the external load balancer network resources:
 }
 ```
 
+Delete the Nginx service external load balancer network resources:
+```
+{
+  gcloud -q compute forwarding-rules delete nginx-service \
+    --region $(gcloud config get-value compute/region)
+
+  gcloud -q compute target-pools delete nginx-service
+
+  gcloud -q compute http-health-checks delete nginx-service
+
+  gcloud -q compute addresses delete nginx-service
+
+  gcloud -q compute firewall-rules delete nginx-service
+}
+```
+
 Delete the `kubernetes-the-hard-way` firewall rules:
 
 ```
 gcloud -q compute firewall-rules delete \
-  kubernetes-the-hard-way-allow-nginx-service \
+  kubernetes-the-hard-way-allow-iap \
   kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external \
   kubernetes-the-hard-way-allow-health-check
@@ -48,7 +63,10 @@ Delete the `kubernetes-the-hard-way` network VPC:
     kubernetes-route-10-200-0-0-24 \
     kubernetes-route-10-200-1-0-24 \
     kubernetes-route-10-200-2-0-24
 
+  gcloud -q compute routers delete kube-nat-router \
+    --region $(gcloud config get-value compute/region)
+
   gcloud -q compute networks subnets delete kubernetes
 
   gcloud -q compute networks delete kubernetes-the-hard-way
 }
 ```
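+
+As a final check, the `kubernetes-the-hard-way` VPC network should no longer appear in the list of networks once the commands above complete:
+
+```
+gcloud compute networks list --filter="name:kubernetes-the-hard-way"
+```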