# Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-containerd](https://github.com/containerd/cri-containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).

## Prerequisites

The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command. Example:

```
gcloud compute ssh worker-0
```
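The same commands must be run on all three workers. One way to keep that manageable, sketched here as a dry run that only prints each invocation, is a small shell loop (remove the leading `echo` to actually execute a command on every node):

```shell
# Dry-run sketch: print one gcloud SSH invocation per worker instance.
# Remove the leading "echo" to run the command on each node for real.
for instance in worker-0 worker-1 worker-2; do
  echo gcloud compute ssh "${instance}" --command "sudo systemctl status kubelet"
done
```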

## Provisioning a Kubernetes Worker Node

Install the OS dependencies:

```
sudo apt-get -y install socat
```

> The socat binary enables support for the `kubectl port-forward` command.

### Download and Install Worker Binaries

```
wget -q --show-progress --https-only --timestamping \
  https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
  https://github.com/containerd/cri-containerd/releases/download/v1.0.0-beta.1/cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.9.6/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.9.6/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.9.6/bin/linux/amd64/kubelet
```

Create the installation directories:

```
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes
```

Install the worker binaries:

```
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
```

```
sudo tar -xvf cri-containerd-1.0.0-beta.1.linux-amd64.tar.gz -C / --no-same-owner
```

```
chmod +x kubectl kube-proxy kubelet
```

```
sudo mv kubectl kube-proxy kubelet /usr/local/bin/
```
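At this point `kubectl`, `kube-proxy`, and `kubelet` should all resolve from the `PATH`. A quick loop over the three names shows where each one landed (on a freshly provisioned worker all three should point at `/usr/local/bin`):

```shell
# Sketch: verify each worker binary resolves from the PATH.
for bin in kubectl kube-proxy kubelet; do
  if command -v "${bin}" > /dev/null; then
    echo "${bin}: $(command -v "${bin}")"
  else
    echo "${bin}: not found" >&2
  fi
done
```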

### Configure CNI Networking

Retrieve the Pod CIDR range for the current compute instance:

```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```
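If the metadata query succeeds, `POD_CIDR` holds a value like `10.200.0.0/24`, with each worker getting its own range. The metadata server is only reachable from inside the instance, so the sketch below uses an example value; the same format check can catch an empty or mangled result on a real worker:

```shell
# Example value; on a worker this is set by the metadata query above.
POD_CIDR="10.200.0.0/24"

# Sanity check: non-empty and shaped like an IPv4 CIDR.
if echo "${POD_CIDR}" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'; then
  echo "POD_CIDR looks valid: ${POD_CIDR}"
else
  echo "POD_CIDR is empty or malformed" >&2
fi
```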

Create the `bridge` network configuration file:

```
cat > 10-bridge.conf <<EOF
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF
```
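Because the heredoc delimiter is unquoted, `${POD_CIDR}` is substituted when the file is written, and a quoting mistake can silently produce invalid JSON. One way to check the result is to run it through a JSON parser. This sketch renders a trimmed-down config with an example CIDR and validates it with `python3 -m json.tool`; the same check can be pointed at the real `10-bridge.conf`:

```shell
# Sketch: render a minimal bridge-style config with an example CIDR,
# then confirm the result parses as JSON.
POD_CIDR="10.200.0.0/24"
cat > /tmp/10-bridge-check.conf <<EOF
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "${POD_CIDR}"}]]
    }
}
EOF
python3 -m json.tool < /tmp/10-bridge-check.conf > /dev/null \
  && echo "10-bridge-check.conf is valid JSON"
```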

Create the `loopback` network configuration file:

```
cat > 99-loopback.conf <<EOF
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
```

Move the network configuration files to the CNI configuration directory:

```
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```

### Configure the Kubelet

```
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
```

```
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
```

```
sudo mv ca.pem /var/lib/kubernetes/
```

Create the `kubelet.service` systemd unit file:

```
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=cri-containerd.service
Requires=cri-containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --allow-privileged=true \\
  --anonymous-auth=false \\
  --authorization-mode=Webhook \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --cloud-provider= \\
  --cluster-dns=10.32.0.10 \\
  --cluster-domain=cluster.local \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --pod-cidr=${POD_CIDR} \\
  --register-node=true \\
  --runtime-request-timeout=15m \\
  --tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
  --tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
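The `kubelet.service` heredoc is also unquoted, so `${POD_CIDR}` and `${HOSTNAME}` must be set in the shell when the file is generated; otherwise they expand to empty strings and the kubelet starts with broken flags. A grep for the tell-tale empty values can catch this (`--cloud-provider=` is intentionally empty and is not checked). The sketch below runs against a stand-in file with healthy values; point the same grep at the real `kubelet.service`:

```shell
# Sketch: detect kubelet flags whose shell variable expanded to nothing,
# e.g. "--pod-cidr= \" or "--tls-cert-file=/var/lib/kubelet/.pem".
# Shown against a hypothetical stand-in file with healthy values.
printf '%s\n' \
  '  --pod-cidr=10.200.0.0/24 \' \
  '  --tls-cert-file=/var/lib/kubelet/worker-0.pem \' > /tmp/kubelet-check.service
if grep -Eq -- '--pod-cidr= |--tls-cert-file=/var/lib/kubelet/\.pem' /tmp/kubelet-check.service; then
  echo "a variable expanded to an empty value" >&2
else
  echo "kubelet flags look populated"
fi
```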

### Configure the Kubernetes Proxy

```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```

Create the `kube-proxy.service` systemd unit file:

```
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --cluster-cidr=10.200.0.0/16 \\
  --kubeconfig=/var/lib/kube-proxy/kubeconfig \\
  --proxy-mode=iptables \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Start the Worker Services

```
sudo mv kubelet.service kube-proxy.service /etc/systemd/system/
```

```
sudo systemctl daemon-reload
```

```
sudo systemctl enable containerd cri-containerd kubelet kube-proxy
```

```
sudo systemctl start containerd cri-containerd kubelet kube-proxy
```
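After `systemctl start`, each of the four services should report `active`. The loop below is a dry run that prints one check per service; removing the leading `echo` runs the checks on the worker itself:

```shell
# Dry-run sketch: print a liveness check for each worker service.
# Remove the leading "echo" to execute the checks on the node.
for service in containerd cri-containerd kubelet kube-proxy; do
  echo sudo systemctl is-active "${service}"
done
```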

> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.

## Verification

Log in to one of the controller nodes:

```
gcloud compute ssh controller-0
```

List the registered Kubernetes nodes:

```
kubectl get nodes
```

> output

```
NAME       STATUS    ROLES     AGE       VERSION
worker-0   Ready     <none>    18s       v1.9.6
worker-1   Ready     <none>    18s       v1.9.6
worker-2   Ready     <none>    18s       v1.9.6
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)