# Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [gVisor](https://github.com/google/gvisor), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).

## Prerequisites

The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command (GCP) or `ssh` (AWS). Example:
GCP

```
gcloud compute ssh worker-0
```
AWS

```
VPC_ID="$(aws ec2 describe-vpcs \
  --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
  --profile kubernetes-the-hard-way \
  --query 'Vpcs[0].VpcId' \
  --output text)"

get_ip() {
  aws ec2 describe-instances \
    --filters \
      Name=vpc-id,Values="$VPC_ID" \
      Name=tag:Name,Values="$1" \
    --profile kubernetes-the-hard-way \
    --query 'Reservations[0].Instances[0].PublicIpAddress' \
    --output text
}
```

```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip worker-0)"
```
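If you want to confirm the helper resolves all three workers before logging in, a quick loop (an optional check, not part of the original lab) prints each public IP:

```
for w in worker-0 worker-1 worker-2; do
  # get_ip is the helper defined above
  echo "$w -> $(get_ip "$w")"
done
```
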
### Running commands in parallel with tmux

[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.

## Provisioning a Kubernetes Worker Node

Install the OS dependencies:

```
{
  sudo apt-get update
  sudo apt-get -y install socat conntrack ipset
}
```

> The socat binary enables support for the `kubectl port-forward` command.

### Download and Install Worker Binaries

```
wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-incubator/cri-tools/releases/download/v1.0.0-beta.0/crictl-v1.0.0-beta.0-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-the-hard-way/runsc \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  https://github.com/containerd/containerd/releases/download/v1.1.0/containerd-1.1.0.linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubelet
```

Create the installation directories:

```
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
```

Install the worker binaries:

```
{
  chmod +x kubectl kube-proxy kubelet runc.amd64 runsc
  sudo mv runc.amd64 runc
  sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
  sudo tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/
  sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
  sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C /
}
```

### Configure CNI Networking

Retrieve the Pod CIDR range for the current compute instance:
GCP

```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```
AWS

```
POD_CIDR="$(curl -s http://169.254.169.254/latest/user-data/ | tr '|' '\n' | grep '^pod-cidr=' | cut -d= -f2)"
```

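It is worth echoing the value before moving on; in this tutorial each worker is assigned its own `10.200.<n>.0/24` subnet, though the exact CIDR depends on how the instances were provisioned:

```
# optional sanity check, not part of the original lab
echo "$POD_CIDR"
```
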
Create the `bridge` network configuration file:

```
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
```

Create the `loopback` network configuration file:

```
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
```

### Configure containerd

Create the `containerd` configuration file:

```
sudo mkdir -p /etc/containerd/
```

```
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runsc"
      runtime_root = "/run/containerd/runsc"
EOF
```

> Untrusted workloads will be run using the gVisor (runsc) runtime.

Create the `containerd.service` systemd unit file:

```
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF
```

### Configure the Kubelet

GCP

```
{
  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
  sudo mv ca.pem /var/lib/kubernetes/
}
```
AWS

```
WORKER_NAME="$(curl -s http://169.254.169.254/latest/user-data/ | tr '|' '\n' | grep '^name=' | cut -d= -f2)"

sudo mv "$WORKER_NAME-key.pem" "$WORKER_NAME.pem" /var/lib/kubelet/
sudo mv "$WORKER_NAME.kubeconfig" /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
```

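As a quick sanity check (an optional step, not part of the original lab), the CNI configuration files and the kubelet credentials should now be in place:

```
ls /etc/cni/net.d/
# expect: 10-bridge.conf  99-loopback.conf

sudo ls /var/lib/kubelet/
# expect the worker's certificate, key, and kubeconfig
```
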
Create the `kubelet-config.yaml` configuration file:
GCP

```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
AWS

```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${WORKER_NAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${WORKER_NAME}-key.pem"
EOF
```

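Because the heredoc above is unquoted, `${POD_CIDR}` and the certificate paths are expanded when the file is written; an optional check (not part of the original lab) confirms the substitution took effect:

```
# the podCIDR and tlsCertFile lines should contain concrete values, not ${...} placeholders
sudo grep -E 'podCIDR|tlsCertFile' /var/lib/kubelet/kubelet-config.yaml
```
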
Create the `kubelet.service` systemd unit file:

```
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Configure the Kubernetes Proxy

```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```

Create the `kube-proxy-config.yaml` configuration file:

```
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```

Create the `kube-proxy.service` systemd unit file:

```
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Start the Worker Services

```
{
  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy
}
```

> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.

## Verification

> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.

List the registered Kubernetes nodes:
GCP

```
gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
```
AWS

```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" \
  "kubectl get nodes --kubeconfig admin.kubeconfig"
```

> output

```
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    20s       v1.10.2
worker-1   Ready     <none>    20s       v1.10.2
worker-2   Ready     <none>    20s       v1.10.2
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)