Bootstrapping an H/A Kubernetes Control Plane
In this lab you will bootstrap a three-node Kubernetes controller cluster. The following virtual machines will be used:
- controller0
- controller1
- controller2
In this lab you will also create a frontend load balancer with a public IP address, providing remote access to the API servers and high availability.
Why
The Kubernetes control plane is made up of the following components:
- Kubernetes API Server
- Kubernetes Scheduler
- Kubernetes Controller Manager
Each component runs on the same set of machines for the following reasons:
- The Scheduler and Controller Manager are tightly coupled with the API Server
- Only one Scheduler and one Controller Manager can be active at a given time, but it is safe to run multiple instances simultaneously: each component elects a leader via the API Server (see the sketch after this list)
- Running multiple copies of each component is required for H/A
- Running each component next to the API Server simplifies configuration
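Leader election is easy to observe once the cluster is up. As a quick sketch (assuming the v1.4-era convention of recording the leader lease as an annotation named control-plane.alpha.kubernetes.io/leader on an endpoints object in the kube-system namespace), inspect the controller manager's endpoints object after completing this lab:
kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml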
Provision the Kubernetes Controller Cluster
Run the following commands on controller0, controller1, and controller2:
TLS Certificates
The TLS certificates created in the Setting up a CA and TLS Cert Generation lab will be used to secure communication between the Kubernetes API server and Kubernetes clients such as kubectl and the kubelet agent. The TLS certificates will also be used to authenticate the Kubernetes API server to etcd via TLS client auth.
Copy the TLS certificates to the Kubernetes configuration directory:
sudo mkdir -p /var/lib/kubernetes
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/
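As an optional sanity check (openssl is assumed to be installed, which is typical on these images), confirm the server certificate's subject and validity window before wiring it into the API server:
openssl x509 -in /var/lib/kubernetes/kubernetes.pem -noout -subject -dates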
Download and install the Kubernetes controller binaries
Download the official Kubernetes release binaries:
wget https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/linux/amd64/kube-apiserver
wget https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/linux/amd64/kube-controller-manager
wget https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/linux/amd64/kube-scheduler
wget https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/linux/amd64/kubectl
Install the Kubernetes binaries:
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
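As a quick sanity check, confirm the binaries are on the PATH and report the expected release:
kube-apiserver --version
kubectl version --client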
Kubernetes API Server
Setup Authentication and Authorization
Authentication
Token-based authentication will be used to limit access to the Kubernetes API. The authentication token is used by the following components:
- kubelet (client)
- Kubernetes API Server (server)
The other components, namely the scheduler and controller manager, access the Kubernetes API server locally over the insecure API port, which does not require authentication. The insecure port is only enabled for local access.
Download the example token file:
wget https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/token.csv
Review the example token file and replace the default token.
cat token.csv
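Each line of the token file follows the static token file format: token,user,uid, with an optional fourth column listing groups. A minimal sketch for swapping in a randomly generated token is shown below; the chAng3m3 placeholder is an assumption about the downloaded file's default value, so substitute whatever token you saw in the previous step:
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
sed -i "s/chAng3m3/${TOKEN}/g" token.csv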
Move the token file into the Kubernetes configuration directory so it can be read by the Kubernetes API server.
sudo mv token.csv /var/lib/kubernetes/
Authorization
Attribute-Based Access Control (ABAC) will be used to authorize access to the Kubernetes API. In this lab ABAC will be set up using the Kubernetes policy file backend as documented in the Kubernetes authorization guide.
Download the example authorization policy file:
wget https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/authorization-policy.jsonl
Review the example authorization policy file. No changes are required.
cat authorization-policy.jsonl
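For reference, each line in the file is a standalone JSON policy object in the ABAC v1beta1 format; a representative entry granting one user unrestricted access looks like this (the user name here is illustrative):
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "admin", "namespace": "*", "resource": "*", "apiGroup": "*"}}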
Move the authorization policy file into the Kubernetes configuration directory so it can be read by the Kubernetes API server.
sudo mv authorization-policy.jsonl /var/lib/kubernetes/
Create the systemd unit file
Capture the internal IP address:
GCE
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
AWS
INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
Azure
INTERNAL_IP=$(ip addr show eth0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
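Whichever platform you are on, confirm the variable holds a plausible address before generating the unit file:
echo $INTERNAL_IP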
Create the systemd unit file:
cat > kube-apiserver.service <<"EOF"
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--advertise-address=INTERNAL_IP \
--allow-privileged=true \
--apiserver-count=3 \
--authorization-mode=ABAC \
--authorization-policy-file=/var/lib/kubernetes/authorization-policy.jsonl \
--bind-address=0.0.0.0 \
--enable-swagger-ui=true \
--etcd-cafile=/var/lib/kubernetes/ca.pem \
--insecure-bind-address=0.0.0.0 \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \
--service-account-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--token-auth-file=/var/lib/kubernetes/token.csv \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sed -i s/INTERNAL_IP/$INTERNAL_IP/g kube-apiserver.service
sudo mv kube-apiserver.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
sudo systemctl status kube-apiserver --no-pager
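At this point the API server should answer on the local insecure port; a quick probe of the health endpoint should print ok:
curl http://127.0.0.1:8080/healthz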
Kubernetes Controller Manager
Create the kube-controller-manager systemd unit file:
cat > kube-controller-manager.service <<"EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-controller-manager \
--allocate-node-cidrs=true \
--cluster-cidr=10.200.0.0/16 \
--cluster-name=kubernetes \
--leader-elect=true \
--master=http://INTERNAL_IP:8080 \
--root-ca-file=/var/lib/kubernetes/ca.pem \
--service-account-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
--service-cluster-ip-range=10.32.0.0/24 \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sed -i s/INTERNAL_IP/$INTERNAL_IP/g kube-controller-manager.service
sudo mv kube-controller-manager.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable kube-controller-manager
sudo systemctl start kube-controller-manager
sudo systemctl status kube-controller-manager --no-pager
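The controller manager exposes its own health endpoint; assuming the default health port for this release (10252), the following should print ok:
curl http://127.0.0.1:10252/healthz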
Kubernetes Scheduler
Create the kube-scheduler systemd unit file:
cat > kube-scheduler.service <<"EOF"
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-scheduler \
--leader-elect=true \
--master=http://INTERNAL_IP:8080 \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
sed -i s/INTERNAL_IP/$INTERNAL_IP/g kube-scheduler.service
sudo mv kube-scheduler.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable kube-scheduler
sudo systemctl start kube-scheduler
sudo systemctl status kube-scheduler --no-pager
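Likewise, assuming the scheduler's default health port for this release (10251), verify it is serving:
curl http://127.0.0.1:10251/healthz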
Verification
kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
Remember to run these steps on controller0, controller1, and controller2.
Setup Kubernetes API Server Frontend Load Balancer
The virtual machines created in this tutorial will not have permission to complete this section. Run the following commands from the same place used to create the virtual machines for this tutorial.
GCE
gcloud compute http-health-checks create kube-apiserver-check \
--description "Kubernetes API Server Health Check" \
--port 8080 \
--request-path /healthz
gcloud compute target-pools create kubernetes-pool \
--http-health-check=kube-apiserver-check
gcloud compute target-pools add-instances kubernetes-pool \
--instances controller0,controller1,controller2
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
--format 'value(address)')
gcloud compute forwarding-rules create kubernetes-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--target-pool kubernetes-pool
AWS
aws elb register-instances-with-load-balancer \
--load-balancer-name kubernetes \
--instances ${CONTROLLER_0_INSTANCE_ID} ${CONTROLLER_1_INSTANCE_ID} ${CONTROLLER_2_INSTANCE_ID}
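The ELB should also health-check the API servers on the insecure port, mirroring the GCE health check above. A sketch, assuming the kubernetes load balancer was created as a classic ELB in the VM provisioning lab:
aws elb configure-health-check \
  --load-balancer-name kubernetes \
  --health-check Target=HTTP:8080/healthz,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2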
Azure
az network lb probe create -g kubernetes \
-n kubernetes-apiserver-check \
--lb-name kubernetes-lb \
--protocol http \
--port 8080 \
--path /healthz
az network lb rule create -g kubernetes \
-n kubernetes-apiserver-rule \
--protocol tcp \
--lb-name kubernetes-lb \
--frontend-ip-name LoadBalancerFrontEnd \
--frontend-port 6443 \
--backend-pool-name kubernetes-lb-pool \
--backend-port 6443 \
--probe-name kubernetes-apiserver-check
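Once the load balancer is in place, verify the API servers are reachable through it. A sketch, assuming KUBERNETES_PUBLIC_ADDRESS holds the load balancer's public IP (it is set in the GCE steps above; export it yourself on AWS or Azure). Depending on how your API server treats unauthenticated requests, you may also need to pass an Authorization: Bearer header carrying the token from token.csv:
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version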