Commit by Simon Willison, 2017-08-30 21:24:46 +00:00 (committed via GitHub).


## Setup Networking
Create a custom virtual network on GCP:
```
gcloud compute networks create kubernetes-the-hard-way --mode custom
```
https://cloud.google.com/compute/docs/vpc/
A virtual network allows your machines to talk to each other over a private network, inaccessible from the outside world unless you create firewall rules to allow access.
The `--mode=custom` flag means you will need to create subnets within this network manually. `--mode=auto` would cause subnets to be created automatically.
Create a subnet called `kubernetes` for your instances:
```
gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24 \
--region us-central1
```
While your virtual network exists across all GCP regions, a subnet is a range of private IP addresses within a single region. Instances are created within a subnet.
`10.240.0.0/24` covers the 256 addresses from `10.240.0.0` to `10.240.0.255`.
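As a quick sanity check (not part of the tutorial), Python's standard `ipaddress` module can show exactly which addresses a CIDR block covers:

```python
import ipaddress

# The instance subnet we just created
subnet = ipaddress.ip_network("10.240.0.0/24")

print(subnet.num_addresses)   # 256 addresses in a /24
print(subnet[0], subnet[-1])  # 10.240.0.0 10.240.0.255
```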
### Create Firewall Rules
https://cloud.google.com/compute/docs/vpc/firewalls
A GCP network also acts as a firewall. By default no connections are allowed from the outside world, and connections between instances are also forbidden. We can add firewall rules to allow our instances to talk to each other within the network.
Kubernetes pods are assigned their own IP addresses independent of the instances they are running on. We will be using the CIDR subnet `10.200.0.0/16` for this, configured in chapter 5 as the `--cluster-cidr` argument to `kube-controller-manager`.
Here we create a firewall rule called `allow-internal` which allows TCP, UDP and ICMP connections between the instances in your `10.240.0.0/24` subnet, and the Kubernetes pods that will live in the `10.200.0.0/16` range.
```
gcloud compute firewall-rules create allow-internal \
--allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16
```
This rule (called `allow-external`) allows traffic on TCP port 22 (SSH), TCP port 3389 (unsure why, see [#160](https://github.com/kelseyhightower/kubernetes-the-hard-way/issues/160)) and TCP port 6443 (the Kubernetes API server). It also allows ICMP traffic.
`0.0.0.0/0` matches every source address, so this rule allows traffic from anywhere outside our network.
```
gcloud compute firewall-rules create allow-external \
--allow tcp:22,tcp:3389,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0
```
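To make the `0.0.0.0/0` claim concrete (again just an illustration, using an arbitrary example address): that block contains every possible IPv4 address.

```python
import ipaddress

everything = ipaddress.ip_network("0.0.0.0/0")

# Any IPv4 source address falls inside 0.0.0.0/0
print(ipaddress.ip_address("203.0.113.50") in everything)  # True
print(everything.num_addresses)                            # 2 ** 32 = 4294967296
```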
Finally we create a rule called `allow-healthz` to allow Google Cloud Platform's health check mechanism to access the Kubernetes `/_status/healthz` API, which runs on port 8080.
https://cloud.google.com/compute/docs/load-balancing/health-checks
GCP health check probes come from addresses in the ranges `130.211.0.0/22` and `35.191.0.0/16`, so we need to provide those as the `--source-ranges`:
```
gcloud compute firewall-rules create allow-healthz \
--allow tcp:8080 \
  --network kubernetes-the-hard-way \
--source-ranges 130.211.0.0/22,35.191.0.0/16
```
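A `/22` is a narrower match than the `/16`, which is easy to get wrong; this small check (an illustration with made-up probe addresses, not a tutorial step) shows which source addresses the `allow-healthz` rule would match:

```python
import ipaddress

probe_ranges = [
    ipaddress.ip_network(r)
    for r in ("130.211.0.0/22", "35.191.0.0/16")
]

def is_gcp_health_check(ip):
    """Would this source address be matched by the allow-healthz rule?"""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in probe_ranges)

print(is_gcp_health_check("130.211.1.9"))   # True: inside 130.211.0.0/22
print(is_gcp_health_check("130.211.9.9"))   # False: the /22 only reaches 130.211.3.255
print(is_gcp_health_check("35.191.200.1"))  # True: inside 35.191.0.0/16
```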
Our firewall rules should now look like this:
```
gcloud compute firewall-rules list --filter "network=kubernetes-the-hard-way"