# Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-containerd](https://github.com/containerd/cri-containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).

## Prerequisites

The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example:

```
gcloud compute ssh worker-0
```

## Provisioning a Kubernetes Worker Node

Install the OS dependencies:

```
sudo apt-get update
sudo apt-get -y install socat
```

> The socat binary enables support for the `kubectl port-forward` command.

### Download and Install Worker Binaries

```
wget -q --show-progress --https-only --timestamping \
  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  https://github.com/containerd/cri-containerd/releases/download/v1.0.0-beta.1/cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubelet
```

Create the installation directories:

```
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
```

Install the worker binaries:

```
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
```

```
sudo tar -xvf cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz -C /
```

```
chmod +x kubectl kube-proxy kubelet
```

```
sudo mv kubectl kube-proxy kubelet /usr/local/bin/
```

### Configure CNI Networking
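The bridge configuration created in this section embeds the instance's Pod CIDR in a JSON file by way of an unquoted shell heredoc, which lets `${POD_CIDR}` expand before the file is written. A minimal, self-contained sketch of that substitution, using a placeholder CIDR value and a temporary path instead of the GCE metadata value and `/etc/cni/net.d`:

```shell
# Placeholder standing in for the metadata-derived Pod CIDR
POD_CIDR="10.200.0.0/24"

# Render a minimal bridge config; because the heredoc delimiter (EOF)
# is unquoted, the shell expands ${POD_CIDR} inside the body.
cat > /tmp/10-bridge-example.conf <<EOF
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "${POD_CIDR}"}]]
    }
}
EOF

# Confirm the CIDR was substituted into the rendered file
grep -q '"subnet": "10.200.0.0/24"' /tmp/10-bridge-example.conf && echo "substitution ok"
```

On a real worker the subnet comes from the instance metadata query below, so each node ends up with a distinct, per-node Pod CIDR in its bridge configuration.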
Retrieve the Pod CIDR range for the current compute instance:

```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```

Create the `bridge` network configuration file:

```
cat > 10-bridge.conf <<EOF
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
```

Create the `loopback` network configuration file:

```
cat > 99-loopback.conf <<EOF
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
```

Move the network configuration files to the CNI configuration directory:

```
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```

### Configure the Kubelet

```
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
```

Create the `kubelet.service` systemd unit file:

```
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=cri-containerd.service
Requires=cri-containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --allow-privileged=true \\
  --anonymous-auth=false \\
  --authorization-mode=Webhook \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --cluster-dns=10.32.0.10 \\
  --cluster-domain=cluster.local \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --pod-cidr=${POD_CIDR} \\
  --register-node=true \\
  --runtime-request-timeout=15m \\
  --tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
  --tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Configure the Kubernetes Proxy

```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```

Create the `kube-proxy.service` systemd unit file:

```
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --cluster-cidr=10.200.0.0/16 \\
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \\
  --proxy-mode=iptables \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Start the Worker Services

```
sudo mv kubelet.service kube-proxy.service /etc/systemd/system/
```

```
sudo systemctl daemon-reload
sudo systemctl enable containerd cri-containerd kubelet kube-proxy
sudo systemctl start containerd cri-containerd kubelet kube-proxy
```

> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.

## Verification

Login to one of the controller nodes:

```
gcloud compute ssh controller-0
```

List the registered Kubernetes nodes:

```
kubectl get nodes
```

> output

```
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    18s       v1.9.0
worker-1   Ready     <none>    18s       v1.9.0
worker-2   Ready     <none>    18s       v1.9.0
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)