# Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-o](https://github.com/kubernetes-incubator/cri-o), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).

## Prerequisites

The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example:

```
gcloud compute ssh worker-0
```

## Provisioning a Kubernetes Worker Node

### Install the cri-o OS Dependencies

Add the `alexlarsson/flatpak` [PPA](https://launchpad.net/ubuntu/+ppas), which hosts the `libostree` package:

```
sudo add-apt-repository -y ppa:alexlarsson/flatpak
```

```
sudo apt-get update
```

Install the OS dependencies required by the cri-o container runtime:

```
sudo apt-get install -y socat libgpgme11 libostree-1-1
```

> On RHEL 7, `libostree` is provided by the `ostree` RPM. For RHEL, run:

```
sudo yum install -y socat device-mapper-libs ostree
```

Also for RHEL, the specific `libdevmapper` library version used to build cri-o has not yet been released.
For now, symlink the available version into place:

```
sudo ln -s /usr/lib64/libdevmapper.so.1.02 /usr/lib64/libdevmapper.so.1.02.1
```

### Download and Install Worker Binaries

```
curl -L \
  -O https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  -O https://github.com/opencontainers/runc/releases/download/v1.0.0-rc4/runc.amd64 \
  -O https://storage.googleapis.com/kubernetes-the-hard-way/crio-amd64-v1.0.0-beta.0.tar.gz \
  -O https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl \
  -O https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-proxy \
  -O https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
```

Create the installation directories:

```
sudo mkdir -p \
  /etc/containers \
  /etc/cni/net.d \
  /etc/crio \
  /opt/cni/bin \
  /usr/local/libexec/crio \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
```

Install the worker binaries:

```
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
```

```
tar -xvf crio-amd64-v1.0.0-beta.0.tar.gz
```

```
chmod +x kubectl kube-proxy kubelet runc.amd64
```

```
sudo mv runc.amd64 /usr/local/bin/runc
```

```
sudo mv crio crioctl kpod kubectl kube-proxy kubelet /usr/local/bin/
```

```
sudo mv conmon pause /usr/local/libexec/crio/
```

### Configure CNI Networking

Retrieve the Pod CIDR range for the current compute instance:

```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```

Create the `bridge` network configuration file:

```
cat > 10-bridge.conf <<EOF
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
```

Create the `loopback` network configuration file:

```
cat > 99-loopback.conf <<EOF
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
```

Move the network configuration files to the CNI configuration directory:

```
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```

### Configure the Worker Services

Create the `crio.service` systemd unit file:

```
cat > crio.service <<EOF
[Unit]
Description=CRI-O daemon
Documentation=https://github.com/kubernetes-incubator/cri-o

[Service]
ExecStart=/usr/local/bin/crio
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF
```

Create the `kubelet.service` systemd unit file:

```
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=crio.service
Requires=crio.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --allow-privileged=true \\
  --anonymous-auth=false \\
  --authorization-mode=Webhook \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --cluster-dns=10.32.0.10 \\
  --cluster-domain=cluster.local \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/crio.sock \\
  --image-pull-progress-deadline=2m \\
  --image-service-endpoint=unix:///var/run/crio.sock \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --pod-cidr=${POD_CIDR} \\
  --register-node=true \\
  --require-kubeconfig \\
  --runtime-request-timeout=10m \\
  --tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
  --tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

Create the `kube-proxy.service` systemd unit file:

```
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --cluster-cidr=10.200.0.0/16 \\
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \\
  --proxy-mode=iptables \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

Move the unit files to the systemd configuration directory:

```
sudo mv crio.service kubelet.service kube-proxy.service /etc/systemd/system/
```

Start the worker services:

```
sudo systemctl daemon-reload
sudo systemctl enable crio kubelet kube-proxy
sudo systemctl start crio kubelet kube-proxy
```

> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
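The `pod-cidr` metadata attribute read above was attached to each compute instance when it was created. A sketch of the per-worker layout this tutorial assumes, where each worker receives a `/24` carved out of the cluster's pod range (the `worker_pod_cidr` helper and the `10.200.0.0/16` range are illustrative assumptions, not part of the lab commands):

```shell
#!/bin/sh
# Hypothetical helper for illustration only: maps a worker instance name to
# the pod CIDR this tutorial assumes for it, e.g. worker-0 -> 10.200.0.0/24.
worker_pod_cidr() {
  instance="$1"
  index="${instance##*-}"   # keep only the digits after the last "-"
  echo "10.200.${index}.0/24"
}

worker_pod_cidr worker-0   # -> 10.200.0.0/24
worker_pod_cidr worker-2   # -> 10.200.2.0/24
```

On a real worker, prefer the metadata server query shown above; the helper only documents the numbering convention.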
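Because the `cat > 10-bridge.conf` heredoc is unquoted, the shell substitutes `${POD_CIDR}` as the file is written, and a bad substitution yields JSON that CNI will reject at pod-start time. A sketch of a quick sanity check before moving the file into `/etc/cni/net.d/`, run here against a copy in `/tmp` with an example CIDR (the exact config fields should match whatever you actually wrote; this mirrors a standard CNI `bridge` plugin configuration):

```shell
# Example value only; on a worker node use the POD_CIDR retrieved from
# the instance metadata as shown in the lab.
POD_CIDR="10.200.0.0/24"

# Write the bridge config to a scratch location.
cat > /tmp/10-bridge.conf <<EOF
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

# Fail fast if the substitution produced invalid JSON.
python3 -m json.tool < /tmp/10-bridge.conf > /dev/null && echo "10-bridge.conf is valid JSON"
```

The same check applies to `99-loopback.conf`; any JSON-aware tool (`jq .`, `python3 -m json.tool`) will do.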
## Verification

Login to one of the controller nodes:

```
gcloud compute ssh controller-0
```

List the registered Kubernetes nodes:

```
kubectl get nodes
```

> output

```
NAME       STATUS    AGE       VERSION
worker-0   Ready     5m        v1.7.4
worker-1   Ready     3m        v1.7.4
worker-2   Ready     7s        v1.7.4
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)