diff --git a/README.md b/README.md
index cc7865f..5c8c6b3 100644
--- a/README.md
+++ b/README.md
@@ -35,7 +35,7 @@ This tutorial assumes you have access to the [Google Cloud Platform](https://clo
 * [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controllers.md)
 * [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
 * [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
-* [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)
+* [Adding Pod Network Routes](docs/11-pod-network-routes.md)
 * [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md)
 * [Smoke Test](docs/13-smoke-test.md)
 * [Cleaning Up](docs/14-cleanup.md)
diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md
index c9f0b6a..fe47d41 100644
--- a/docs/11-pod-network-routes.md
+++ b/docs/11-pod-network-routes.md
@@ -1,60 +1,46 @@
-# Provisioning Pod Network Routes
+# Adding Pod Network Routes
 
-Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network [routes](https://cloud.google.com/compute/docs/vpc/routes).
+Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods cannot communicate with other pods running on different nodes due to missing network routes.
 
-In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
+In this chapter, you will create a route for each worker node that maps the node's Pod CIDR range to the node's IP address.
 
 > There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
 
-## The Routing Table
-
-In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network.
-
-Print the internal IP address and Pod CIDR range for each worker instance:
-
-```
-for instance in worker-0 worker-1 worker-2; do
-  gcloud compute instances describe ${instance} \
-    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
-done
-```
-
-> output
-
-```
-10.240.0.20 10.200.0.0/24
-10.240.0.21 10.200.1.0/24
-10.240.0.22 10.200.2.0/24
-```
+*The instructions in this chapter should be run on the host, not in the virtual machines.*
 
 ## Routes
 
+Get the name of the bridge backing the `kubernetes-nw` network:
+
+```
+$ KUBERNETES_BRIDGE=$(sudo virsh net-info kubernetes-nw | grep Bridge | awk '{ print $2 }')
+```
+
 Create network routes for each worker instance:
 
 ```
-for i in 0 1 2; do
-  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
-    --network kubernetes-the-hard-way \
-    --next-hop-address 10.240.0.2${i} \
-    --destination-range 10.200.${i}.0/24
+$ for i in 1 2 3; do
+  sudo ip route add 10.200.${i}.0/24 via 10.240.0.2${i} dev ${KUBERNETES_BRIDGE}
 done
 ```
 
-List the routes in the `kubernetes-the-hard-way` VPC network:
+List the routes on the host:
 
 ```
-gcloud compute routes list --filter "network: kubernetes-the-hard-way"
+$ ip route
 ```
 
 > output
 
 ```
-NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
-default-route-081879136902de56  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   1000
-default-route-55199a5aa126d7aa  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
-kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
-kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
-kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000
+default via 172.16.0.1 dev wlp4s0 proto dhcp metric 600
+10.200.1.0/24 via 10.240.0.21 dev virbr1
+10.200.2.0/24 via 10.240.0.22 dev virbr1
+10.200.3.0/24 via 10.240.0.23 dev virbr1
+10.240.0.0/24 dev virbr1 proto kernel scope link src 10.240.0.1
+
+(...)
+
 ```
 
 Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
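
Not part of the diff itself: the bridge-name pipeline added in the chapter can be exercised without libvirt by feeding it canned text. The sample output below is an assumption about `virsh net-info`'s format (the real command needs libvirt and sudo), and the loop only echoes the `ip route add` commands instead of running them:

```shell
# Canned text standing in for `virsh net-info kubernetes-nw` output
# (assumed format; the real command requires libvirt).
net_info='Name:           kubernetes-nw
Active:         yes
Bridge:         virbr1'

# Same extraction pipeline as in the chapter, applied to the canned text.
KUBERNETES_BRIDGE=$(printf '%s\n' "$net_info" | grep Bridge | awk '{ print $2 }')
echo "bridge=${KUBERNETES_BRIDGE}"   # prints: bridge=virbr1

# Dry run: echo the route commands the chapter's loop would execute as root.
for i in 1 2 3; do
  echo "ip route add 10.200.${i}.0/24 via 10.240.0.2${i} dev ${KUBERNETES_BRIDGE}"
done
```

Echoing first is a cheap way to confirm the worker index maps to the intended next-hop address (`10.240.0.21`–`10.240.0.23`) before touching the host routing table.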