# Bootstrapping the Kubernetes Worker Nodes

In this chapter, you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [gVisor](https://github.com/google/gvisor), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).

## Prerequisites

The commands in this chapter must be run on each worker node: `worker-1`, `worker-2`, and `worker-3`. Log in to each worker node:

```
$ ssh -i ~/.ssh/id_rsa-k8s 10.240.0.21
```

### Running commands in parallel with tmux

[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple virtual machines at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.

## Provisioning a Kubernetes Worker Node

Install the OS dependencies:

```
$ {
  sudo apt-get update
  sudo apt-get -y install socat conntrack ipset
}
```

> The socat binary enables support for the `kubectl port-forward` command.

### Download and Install Worker Binaries

```
$ wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
```

Create the installation directories:

```
$ sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
```

Install the worker binaries:

```
$ {
  sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
  sudo mv runc.amd64 runc
  chmod +x kubectl kube-proxy kubelet runc runsc
  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
  sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
  sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
  sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
}
```

### Configure CNI Networking

Get the Pod CIDR range for the current compute instance:

```
$ POD_CIDR=10.200.$(uname -n | awk -F"-" '{print $2}').0/24
```

Create the `bridge` network configuration file:

```
$ cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
```

Create the `loopback` network configuration file:

```
$ cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
```

### Configure containerd

Create the `containerd` configuration file:

```
$ sudo mkdir -p /etc/containerd/
```

```
$ cat <<EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
EOF
```

> Untrusted workloads will be run using the gVisor (runsc) runtime.

Create the `containerd.service` systemd unit file:

```
$ cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
```

### Configure the Kubelet

```
$ {
  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
  sudo mv ca.pem /var/lib/kubernetes/
}
```

Create the `kubelet-config.yaml` configuration file:

```
$ cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```

> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.

Create the `kubelet.service` systemd unit file:

```
$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Configure the Kubernetes Proxy

```
$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```

Create the `kube-proxy-config.yaml` configuration file:

```
$ cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```

Create the `kube-proxy.service` systemd unit file:

```
$ cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Start the Worker Services

```
$ {
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy
}
```

> Remember to run the above commands on each worker node: `worker-1`, `worker-2`, and `worker-3`.

## Verification

> The virtual machines created in this tutorial will not have permission to complete this section.
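Before running the remote check below, you can optionally confirm on each worker node that the three services came up. This is a suggested local sanity check, not a required step; `systemctl is-active` prints one status per unit:

```
$ sudo systemctl is-active containerd kubelet kube-proxy
```

All three lines of output should read `active`. If a unit reports `failed`, inspect its logs with `journalctl -u <unit>`.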
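You can also use the `crictl` binary installed earlier to confirm that containerd's CRI endpoint is responding. This is a sketch assuming the default socket path configured in the kubelet flags above:

```
$ sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info
```

The command prints the runtime's status conditions (such as `RuntimeReady` and `NetworkReady`) as JSON.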
Run the following commands from the same machine used to create the compute instances.

List the registered Kubernetes nodes:

```
$ ssh -i ~/.ssh/id_rsa-k8s 10.240.0.11 "kubectl get nodes --kubeconfig admin.kubeconfig"
```

> output

```
NAME       STATUS   ROLES    AGE   VERSION
worker-1   Ready    <none>   35s   v1.12.0
worker-2   Ready    <none>   36s   v1.12.0
worker-3   Ready    <none>   36s   v1.12.0
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)