Make the nodes private, add Cloud NAT, use IAP for SSH, and expose the nginx service behind a TCP LB

parent: 5c462220b7
commit: 4a4eb25868

@@ -32,6 +32,25 @@ gcloud compute networks subnets create kubernetes \

> The `10.240.0.0/24` IP address range can host up to 254 compute instances.

+### Cloud NAT
+
+In this tutorial we will be setting up Kubernetes with private nodes, that is, nodes without a public IP. The nodes still need a way to reach the internet (for example, to download container images when we deploy an application); that is what a NAT gateway provides. We will use [Google Cloud NAT](https://cloud.google.com/nat/docs/overview), a fully managed NAT gateway.
+
+Create a Google Cloud Router:
+
+```
+gcloud compute routers create kube-nat-router --network kubernetes-the-hard-way
+```
+
+Create a Google Cloud NAT Gateway:
+
+```
+gcloud compute routers nats create kube-nat-gateway \
+  --router=kube-nat-router \
+  --auto-allocate-nat-external-ips \
+  --nat-all-subnet-ip-ranges
+```

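As an optional sanity check (not part of the lab steps themselves), you can confirm the NAT configuration is attached to the router before moving on; `gcloud compute routers nats describe` is the standard subcommand for this:

```
gcloud compute routers nats describe kube-nat-gateway \
  --router=kube-nat-router \
  --region $(gcloud config get-value compute/region)
```
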
### Firewall Rules

Create a firewall rule that allows internal communication across all protocols:

@@ -43,15 +62,25 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --source-ranges 10.240.0.0/24,10.200.0.0/16
```

-Create a firewall rule that allows external SSH, ICMP, and HTTPS:
+Create a firewall rule that allows external ICMP and HTTPS:

```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
-  --allow tcp:22,tcp:6443,icmp \
+  --allow tcp:6443,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 0.0.0.0/0
```

+Create a firewall rule that allows traffic from the IAP ([Identity Aware Proxy](https://cloud.google.com/iap/docs/concepts-overview)) netblock; this is required for configuring SSH with [IAP TCP forwarding](https://cloud.google.com/iap/docs/using-tcp-forwarding) later:
+
+```
+gcloud compute firewall-rules create kubernetes-the-hard-way-allow-iap \
+  --allow tcp \
+  --network kubernetes-the-hard-way \
+  --source-ranges 35.235.240.0/20
+```

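The rule above opens every TCP port to the IAP netblock. For the SSH and `scp` tunneling used in this tutorial only port 22 needs to be reachable, so a narrower variant should also work if you prefer (the rule name here is only illustrative, not part of this commit):

```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-iap-ssh \
  --allow tcp:22 \
  --network kubernetes-the-hard-way \
  --source-ranges 35.235.240.0/20
```
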
> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.

List the firewall rules in the `kubernetes-the-hard-way` VPC network:

@@ -110,7 +139,8 @@ for i in 0 1 2; do
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
-    --tags kubernetes-the-hard-way,controller
+    --tags kubernetes-the-hard-way,controller \
+    --no-address
done
```

@@ -135,7 +165,8 @@ for i in 0 1 2; do
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
-    --tags kubernetes-the-hard-way,worker
+    --tags kubernetes-the-hard-way,worker \
+    --no-address
done
```

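With `--no-address`, the instances should come up without external IPs. A quick way to confirm this is to list them and check that the `EXTERNAL_IP` column is empty:

```
gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
```
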
@@ -161,13 +192,30 @@ worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX

## Configuring SSH Access

-SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
+SSH will be used to configure the controller and worker instances. Because our nodes are private we cannot SSH into them directly from the internet; instead we will use [Identity Aware Proxy](https://cloud.google.com/iap/docs/concepts-overview) with a feature called [TCP forwarding](https://cloud.google.com/iap/docs/using-tcp-forwarding).
+
+Grant your current user the `roles/iap.tunnelResourceAccessor` IAM role:
+
+```
+{
+  export USER=$(gcloud auth list --format="value(account)")
+
+  export PROJECT=$(gcloud config list --format="value(core.project)")
+
+  gcloud projects add-iam-policy-binding $PROJECT \
+    --member=user:$USER \
+    --role=roles/iap.tunnelResourceAccessor
+}
+```

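If you want to double-check that the binding was created, `gcloud projects get-iam-policy` can filter for it (an optional check; it reuses the `$PROJECT` variable exported above):

```
gcloud projects get-iam-policy $PROJECT \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/iap.tunnelResourceAccessor" \
  --format="value(bindings.members)"
```
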
+When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.

Test SSH access to the `controller-0` compute instances:

```
-gcloud compute ssh controller-0
+gcloud compute ssh --tunnel-through-iap controller-0
```

+> The `--tunnel-through-iap` flag tells `gcloud` not to try connecting over a public IP and to tunnel the SSH connection through IAP instead.

If this is your first time connecting to a compute instance SSH keys will be generated for you. Enter a passphrase at the prompt to continue:

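If you ever need a plain `ssh` or `scp` session against one of the private nodes, an IAP tunnel can also be opened explicitly with `gcloud compute start-iap-tunnel`; a rough sketch (the local port 2222 is an arbitrary choice, and the SSH username depends on your OS Login/metadata setup):

```
# Forward local port 2222 to controller-0's SSH port through IAP;
# in another terminal, point any SSH client at localhost:2222
gcloud compute start-iap-tunnel controller-0 22 \
  --local-host-port=localhost:2222
```
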
@@ -132,9 +132,6 @@ cat > ${instance}-csr.json <<EOF
}
EOF

-EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
-  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].networkIP)')

@@ -142,7 +139,7 @@ cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
-  -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
+  -hostname=${instance},${INTERNAL_IP} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done

@@ -396,7 +393,7 @@ Copy the appropriate certificates and private keys to each worker instance:

```
for instance in worker-0 worker-1 worker-2; do
-  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
+  gcloud compute scp --tunnel-through-iap ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
```

@@ -404,7 +401,7 @@ Copy the appropriate certificates and private keys to each controller instance:

```
for instance in controller-0 controller-1 controller-2; do
-  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
+  gcloud compute scp --tunnel-through-iap ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done
```

@@ -199,7 +199,7 @@ Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker

```
for instance in worker-0 worker-1 worker-2; do
-  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
+  gcloud compute scp --tunnel-through-iap ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
```

@@ -207,7 +207,7 @@ Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig f

```
for instance in controller-0 controller-1 controller-2; do
-  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
+  gcloud compute scp --tunnel-through-iap admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```

@@ -36,7 +36,7 @@ Copy the `encryption-config.yaml` encryption config file to each controller inst

```
for instance in controller-0 controller-1 controller-2; do
-  gcloud compute scp encryption-config.yaml ${instance}:~/
+  gcloud compute scp --tunnel-through-iap encryption-config.yaml ${instance}:~/
done
```

@@ -7,7 +7,7 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:

```
-gcloud compute ssh controller-0
+gcloud compute ssh --tunnel-through-iap controller-0
```

### Running commands in parallel with tmux

@@ -7,7 +7,7 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:

```
-gcloud compute ssh controller-0
+gcloud compute ssh --tunnel-through-iap controller-0
```

### Running commands in parallel with tmux

@@ -272,8 +272,6 @@ Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
X-Content-Type-Options: nosniff

ok
```

> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.

@@ -287,7 +285,7 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.

```
-gcloud compute ssh controller-0
+gcloud compute ssh --tunnel-through-iap controller-0
```

Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

@@ -7,7 +7,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example:

```
-gcloud compute ssh worker-0
+gcloud compute ssh --tunnel-through-iap worker-0
```

### Running commands in parallel with tmux

@@ -284,7 +284,7 @@ EOF
{
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy
+  sudo systemctl status containerd kubelet kube-proxy
}
```

@@ -297,7 +297,7 @@ EOF
List the registered Kubernetes nodes:

```
-gcloud compute ssh controller-0 \
+gcloud compute ssh --tunnel-through-iap controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
```

@@ -16,7 +16,7 @@ kubectl create secret generic kubernetes-the-hard-way \
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:

```
-gcloud compute ssh controller-0 \
+gcloud compute ssh --tunnel-through-iap controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \

@@ -148,7 +148,7 @@ Print the nginx version by executing the `nginx -v` command in the `nginx` conta
kubectl exec -ti $POD_NAME -- nginx -v
```

-> output
+> output (your output might vary depending on the nginx version)

```
nginx version: nginx/1.17.3

@@ -166,32 +166,47 @@ kubectl expose deployment nginx --port 80 --type NodePort

> The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.

-Retrieve the node port assigned to the `nginx` service:
+We will set up a TCP load balancer with the worker nodes as its target pool, forwarding to the port allocated by the NodePort service. This exposes the nginx deployment to the internet (we have to use a load balancer because the nodes don't have public IPs).

```
-NODE_PORT=$(kubectl get svc nginx \
-  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
+{
+  gcloud compute addresses create nginx-service \
+    --region $(gcloud config get-value compute/region)
+
+  NGINX_SERVICE_PUBLIC_ADDRESS=$(gcloud compute addresses describe nginx-service \
+    --region $(gcloud config get-value compute/region) \
+    --format 'value(address)')
+
+  NODE_PORT=$(kubectl get svc nginx \
+    --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
+
+  gcloud compute firewall-rules create nginx-service \
+    --network kubernetes-the-hard-way \
+    --allow tcp:${NODE_PORT}
+
+  gcloud compute http-health-checks create nginx-service \
+    --description "Nginx Health Check" \
+    --host "nginx.default.svc.cluster.local" \
+    --port ${NODE_PORT}
+
+  gcloud compute target-pools create nginx-service \
+    --http-health-check nginx-service
+
+  gcloud compute target-pools add-instances nginx-service \
+    --instances worker-0,worker-1,worker-2
+
+  gcloud compute forwarding-rules create nginx-service \
+    --address ${NGINX_SERVICE_PUBLIC_ADDRESS} \
+    --ports ${NODE_PORT} \
+    --region $(gcloud config get-value compute/region) \
+    --target-pool nginx-service
+}
```

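The target pool uses the HTTP health check created above, so it can take a short while before the workers report healthy and the load balancer starts forwarding traffic. An optional way to watch the backend health (standard `gcloud` command, not part of the lab steps):

```
gcloud compute target-pools get-health nginx-service \
  --region $(gcloud config get-value compute/region)
```
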
-Create a firewall rule that allows remote access to the `nginx` node port:
+Make an HTTP request using the external IP address and the nginx node port:

-```
-gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
-  --allow=tcp:${NODE_PORT} \
-  --network kubernetes-the-hard-way
-```

-Retrieve the external IP address of a worker instance:

-```
-EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
-  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
-```

-Make an HTTP request using the external IP address and the `nginx` node port:

```
-curl -I http://${EXTERNAL_IP}:${NODE_PORT}
+curl -I http://${NGINX_SERVICE_PUBLIC_ADDRESS}:${NODE_PORT}
```

> output

@@ -16,7 +16,6 @@ gcloud -q compute instances delete \
## Networking

Delete the external load balancer network resources:

```
{
  gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \

@@ -30,11 +29,27 @@ Delete the external load balancer network resources:
}
```

+Delete the Nginx service external load balancer network resources:
+
+```
+{
+  gcloud -q compute forwarding-rules delete nginx-service \
+    --region $(gcloud config get-value compute/region)
+
+  gcloud -q compute target-pools delete nginx-service
+
+  gcloud -q compute http-health-checks delete nginx-service
+
+  gcloud -q compute addresses delete nginx-service
+
+  gcloud -q compute firewall-rules delete nginx-service
+}
+```
+
Delete the `kubernetes-the-hard-way` firewall rules:

```
gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
+  kubernetes-the-hard-way-allow-iap \
  kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external \
  kubernetes-the-hard-way-allow-health-check

@@ -49,6 +64,9 @@ Delete the `kubernetes-the-hard-way` network VPC:
    kubernetes-route-10-200-1-0-24 \
    kubernetes-route-10-200-2-0-24

+  gcloud -q compute routers delete kube-nat-router \
+    --region $(gcloud config get-value compute/region)

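  # the kube-nat-gateway NAT config lives on this router and is removed along with it
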
  gcloud -q compute networks subnets delete kubernetes

  gcloud -q compute networks delete kubernetes-the-hard-way
