# Bootstrapping Kubernetes Workers

In this lab you will bootstrap 3 Kubernetes worker nodes. The following virtual machines will be used:

* worker0
* worker1
* worker2

## Why

Kubernetes worker nodes are responsible for running your containers. All Kubernetes clusters need one or more worker nodes. We are running the worker nodes on dedicated machines for the following reasons:

* Ease of deployment and configuration
* Avoid mixing arbitrary workloads with critical cluster components. We are building machines with just enough resources, so we don't have to worry about wasting them.

Some people prefer to run workers and cluster services anywhere in the cluster. This is entirely possible, and you'll have to decide what's best for your environment.

## Provision the Kubernetes Worker Nodes

Run the following commands on `worker0`, `worker1`, `worker2`:

### Set the Kubernetes Public Address

#### GCE

```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
  --region=us-central1 \
  --format 'value(address)')
```

#### AWS

```
KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \
  --load-balancer-names kubernetes | \
  jq -r '.LoadBalancerDescriptions[].DNSName')
```

---

#### Move the kubeconfig files in place

```
sudo mkdir -p /var/lib/kubelet
```

```
sudo mv bootstrap.kubeconfig kube-proxy.kubeconfig /var/lib/kubelet
```

#### Move the TLS certificates in place

```
sudo mkdir -p /var/lib/kubernetes
```

```
sudo mv ca.pem /var/lib/kubernetes/
```

#### Docker

```
wget https://get.docker.com/builds/Linux/x86_64/docker-1.12.6.tgz
```

```
tar -xvf docker-1.12.6.tgz
```

```
sudo cp docker/docker* /usr/bin/
```

Create the Docker systemd unit file:

```
cat > docker.service <<EOF
...
EOF
```

#### kubelet

Create the kubelet systemd unit file:

```
cat > kubelet.service <<EOF
...
EOF
```

#### kube-proxy

Create the kube-proxy systemd unit file:

```
cat > kube-proxy.service <<EOF
...
EOF
```

> Remember to run these steps on `worker0`, `worker1`, and `worker2`
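
For reference, a minimal `kube-proxy.service` unit might look like the sketch below. This is not the lab's exact unit file: the binary path (`/usr/bin/kube-proxy`), the cluster CIDR (`10.200.0.0/16`), and the proxy mode are assumptions and should match your own cluster configuration. The kubeconfig path is the one populated earlier in this lab.

```
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy

[Service]
# The cluster CIDR below is an assumption; use your cluster's pod network range.
ExecStart=/usr/bin/kube-proxy \\
  --cluster-cidr=10.200.0.0/16 \\
  --kubeconfig=/var/lib/kubelet/kube-proxy.kubeconfig \\
  --proxy-mode=iptables \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```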
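
Once the unit files are written, they can be installed and started with the standard systemd workflow. This is a sketch of that workflow; it assumes the three unit files were created in the current directory as above.

```
# Install the unit files and start the worker services
sudo mv docker.service kubelet.service kube-proxy.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable docker kubelet kube-proxy
sudo systemctl start docker kubelet kube-proxy
```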
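
To verify that each worker came up cleanly, check the three services on every worker:

```
sudo systemctl status docker kubelet kube-proxy --no-pager
```

Then, assuming `kubectl` has been configured on one of the controller machines, confirm that the workers have registered with the API server. If the workers use TLS bootstrapping, their certificate signing requests may need to be approved first (`kubectl get csr`, then `kubectl certificate approve`) before they appear.

```
kubectl get nodes
```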