updated 08-bootstrapping-kubernetes-controllers

pull/709/head
Xander Grzywinski 2019-05-24 10:24:53 -07:00
parent b03cb5ae94
commit 01cceca8b5
2 changed files with 82 additions and 61 deletions


@@ -144,25 +144,78 @@ az network public-ip show --resource-group kubernetes-the-hard-way --name kubern
}
```
## The Kubernetes Frontend Load Balancer
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way-ip` static IP address will be attached to the resulting load balancer.
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
### Provision a Network Load Balancer
Create the external load balancer network resources:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(az network public-ip show -g kubernetes-the-hard-way -n kubernetes-the-hard-way-ip --output tsv | cut -f6)
az network lb create \
--name kubernetes-the-hard-way-lb \
--resource-group kubernetes-the-hard-way \
--backend-pool-name kubernetes-the-hard-way-lb-pool \
--public-ip-address kubernetes-the-hard-way-ip \
--frontend-ip-name kubernetes-the-hard-way-ip \
--vnet-name kubernetes-the-hard-way-vnet \
--subnet kubernetes-the-hard-way-subnet

az network lb probe create \
--lb-name kubernetes-the-hard-way-lb \
--resource-group kubernetes-the-hard-way \
--name kubernetes-the-hard-way-lb-probe \
--port 80 \
--protocol tcp

az network lb rule create \
--resource-group kubernetes-the-hard-way \
--lb-name kubernetes-the-hard-way-lb \
--name kubernetes-the-hard-way-lb-rule \
--protocol tcp \
--frontend-port 6443 \
--backend-port 6443 \
--frontend-ip-name kubernetes-the-hard-way-ip \
--backend-pool-name kubernetes-the-hard-way-lb-pool \
--probe-name kubernetes-the-hard-way-lb-probe
}
```
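Before moving on, it can help to confirm the rule and probe were wired up as expected. A quick check using the standard list command (optional, not part of the tutorial steps):

```
az network lb rule list \
--resource-group kubernetes-the-hard-way \
--lb-name kubernetes-the-hard-way-lb \
--output table
```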
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
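The image URN used below can be checked against the Azure marketplace; a quick (if slow) lookup, assuming the `Canonical:UbuntuServer:18.04-LTS` identifiers are still published:

```
az vm image list \
--publisher Canonical \
--offer UbuntuServer \
--sku 18.04-LTS \
--all \
--output table
```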
### Kubernetes Controllers
Create three network interfaces and three compute instances which will host the Kubernetes control plane:
```
for i in 0 1 2; do
az network nic create \
--resource-group kubernetes-the-hard-way \
--name controller-${i}-nic \
--vnet-name kubernetes-the-hard-way-vnet \
--subnet kubernetes-the-hard-way-subnet \
--network-security-group kubernetes-the-hard-way-nsg \
--private-ip-address 10.240.0.1${i} \
--lb-name kubernetes-the-hard-way-lb \
--lb-address-pools kubernetes-the-hard-way-lb-pool \
--ip-forwarding true
done
```
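To sanity-check the NICs and their fixed private addresses before attaching them, one option is a JMESPath query (a sketch; any listing of the NICs works):

```
az network nic list \
--resource-group kubernetes-the-hard-way \
--query "[?starts_with(name, 'controller')].{name: name, ip: ipConfigurations[0].privateIpAddress}" \
--output table
```

With the NICs in place, create the controller VMs and attach one NIC to each: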
```
for i in 0 1 2; do
az vm create \
--name controller-${i} \
--resource-group kubernetes-the-hard-way \
--no-wait \
--vnet-name kubernetes-the-hard-way-vnet \
--subnet kubernetes-the-hard-way-subnet \
--nsg kubernetes-the-hard-way-nsg \
--private-ip-address 10.240.0.1${i} \
--public-ip-address-allocation Static \
--nics controller-${i}-nic \
--image Canonical:UbuntuServer:18.04-LTS:latest \
--admin-username azureuser \
--generate-ssh-keys
@@ -179,17 +232,27 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
Create three network interfaces and three compute instances which will host the Kubernetes worker nodes:
```
for i in 0 1 2; do
az network nic create \
--resource-group kubernetes-the-hard-way \
--name worker-${i}-nic \
--vnet-name kubernetes-the-hard-way-vnet \
--subnet kubernetes-the-hard-way-subnet \
--network-security-group kubernetes-the-hard-way-nsg \
--private-ip-address 10.240.0.2${i} \
--lb-name kubernetes-the-hard-way-lb \
--lb-address-pools kubernetes-the-hard-way-lb-pool \
--ip-forwarding true
done
```
```
for i in 0 1 2; do
az vm create \
--name worker-${i} \
--resource-group kubernetes-the-hard-way \
--no-wait \
--vnet-name kubernetes-the-hard-way-vnet \
--subnet kubernetes-the-hard-way-subnet \
--nsg kubernetes-the-hard-way-nsg \
--private-ip-address 10.240.0.2${i} \
--public-ip-address-allocation Static \
--nics worker-${i}-nic \
--image Canonical:UbuntuServer:18.04-LTS:latest \
--admin-username azureuser \
--generate-ssh-keys


@@ -4,10 +4,11 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using `ssh`. Example:
```
EXTERNAL_IP=$(az vm show --show-details -g kubernetes-the-hard-way -n controller-0 --output tsv | cut -f19)
ssh azureuser@${EXTERNAL_IP}
```
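The `cut -f19` field offset depends on the CLI's tsv column order, which can shift between versions; querying the property by name is a more robust alternative:

```
EXTERNAL_IP=$(az vm show --show-details \
-g kubernetes-the-hard-way \
-n controller-0 \
--query publicIps --output tsv)
```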
### Running commands in parallel with tmux
@@ -58,8 +59,7 @@ Install the Kubernetes binaries:
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H Metadata:true "http://169.254.169.254/metadata/instance/network/interface/0/ipv4/ipAddress/0/privateIpAddress?api-version=2017-08-01&format=text")
```
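A quick echo confirms the Azure Instance Metadata Service returned the fixed address assigned earlier (e.g. `10.240.0.10` on `controller-0`):

```
echo ${INTERNAL_IP}
```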
Create the `kube-apiserver.service` systemd unit file:
@@ -202,7 +202,7 @@ EOF
### Enable HTTP Health Checks
An [Azure Load Balancer](https://azure.microsoft.com/en-us/services/load-balancer/) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The load balancer probe created earlier checks port `80`, which means the HTTPS endpoint exposed by the API server cannot be used directly. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
> The `/healthz` API server endpoint does not require authentication by default.
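Once nginx is configured later in this section, the proxy path can be exercised locally on each controller; a minimal check, assuming nginx is listening on port `80`:

```
curl -i http://127.0.0.1/healthz
```

A healthy API server should answer with an HTTP `200` status and an `ok` body.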
@@ -284,7 +284,8 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
```
EXTERNAL_IP=$(az vm show --show-details -g kubernetes-the-hard-way -n controller-0 --output tsv | cut -f19)
ssh azureuser@${EXTERNAL_IP}
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
@@ -335,55 +336,12 @@ subjects:
EOF
```
### Verification
Retrieve the `kubernetes-the-hard-way-ip` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(az network public-ip show -g kubernetes-the-hard-way -n kubernetes-the-hard-way-ip --output tsv | cut -f6)
```
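As with the external IP lookup earlier, `--query ipAddress` avoids depending on the tsv column order:

```
KUBERNETES_PUBLIC_ADDRESS=$(az network public-ip show \
-g kubernetes-the-hard-way \
-n kubernetes-the-hard-way-ip \
--query ipAddress --output tsv)
```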
Make an HTTP request for the Kubernetes version info:
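A sketch of that request, assuming the `ca.pem` certificate authority file from the earlier labs is in the working directory:

```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```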