# Bootstrapping the Kubernetes Control Plane

In this chapter, you will bootstrap the Kubernetes control plane across three virtual machines and configure it for high availability. You will also create a load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.

## Provision the Kubernetes Control Plane

### Download and Distribute the Kubernetes Controller Binaries

On `client-1`, download and distribute the official Kubernetes release binaries:

```
$ wget -q --show-progress --https-only --timestamping \
    "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
$ for num in 1 2 3; do
    scp -i ~/.ssh/id_rsa-k8s kube-apiserver kube-controller-manager kube-scheduler kubectl ${USER}@10.240.0.1${num}:~/
  done
```
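
To confirm the binaries landed on each controller, an optional sanity check (assuming the same SSH key and user as in the `scp` step above):

```
$ for num in 1 2 3; do
    ssh -i ~/.ssh/id_rsa-k8s ${USER}@10.240.0.1${num} "ls -lh kube-apiserver kube-controller-manager kube-scheduler kubectl"
  done
```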

### Running commands in parallel with tmux

The commands in the following sections must be run on each controller node: `controller-1`, `controller-2`, and `controller-3`, unless noted otherwise. Log in to each controller node:

```
$ ssh -i ~/.ssh/id_rsa-k8s 10.240.0.11
```

[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple virtual machines at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
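
For example, a minimal tmux workflow (the session name and pane layout are illustrative) is to open one pane per controller, SSH into each, and then mirror keystrokes to all panes:

```
$ tmux new-session -s controllers
# split the window into one pane per controller and ssh into each, then:
$ tmux set-window-option synchronize-panes on
```
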
### Install the Kubernetes Controller Binaries

Create the Kubernetes configuration directory:

```
$ sudo mkdir -p /etc/kubernetes/config
```

Install the Kubernetes binaries:

```
$ {
    chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
    sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
  }
```
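
Optionally, confirm the binaries are on the `PATH` and report the expected release (this should print `Kubernetes v1.12.0`):

```
$ kube-apiserver --version
```
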
### Configure the Kubernetes API Server

```
$ {
    sudo mkdir -p /var/lib/kubernetes/

    sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
      service-account-key.pem service-account.pem \
      encryption-config.yaml /var/lib/kubernetes/
  }
```

The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP address for the current compute instance:

```
$ INTERNAL_IP=$(ip a s | grep 'inet 10' | awk '{ print $2 }' | awk -F"/" '{ print $1 }')
```
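
Before generating the unit file, it is worth confirming the variable holds a single address (on `controller-1` this should print `10.240.0.11`):

```
$ echo ${INTERNAL_IP}
```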

Create the `kube-apiserver.service` systemd unit file:

```
$ cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --enable-swagger-ui=true \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.11:2379,https://10.240.0.12:2379,https://10.240.0.13:2379 \\
  --event-ttl=1h \\
  --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config=api/all \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Configure the Kubernetes Controller Manager

Move the `kube-controller-manager` kubeconfig into place:

```
$ sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```

Create the `kube-controller-manager.service` systemd unit file:

```
$ cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Configure the Kubernetes Scheduler

Move the `kube-scheduler` kubeconfig into place:

```
$ sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```

Create the `kube-scheduler.yaml` configuration file:

```
$ cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
```

Create the `kube-scheduler.service` systemd unit file:

```
$ cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```

### Start the Controller Services

```
$ {
    sudo systemctl daemon-reload
    sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
    sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
  }
```

> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
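
If a service fails to start, a quick way to investigate is via systemd (shown here for the API server; the other units can be inspected the same way):

```
$ sudo systemctl status kube-apiserver
$ sudo journalctl -u kube-apiserver --no-pager | tail -n 20
```
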
## Verification

```
$ kubectl get componentstatuses --kubeconfig admin.kubeconfig
```

> output

```
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
```
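
The API server can also be probed directly on each controller via its health endpoint (a minimal check that assumes `127.0.0.1` is among the hosts in the API server certificate; it should return `ok`):

```
$ curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz
```
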
## RBAC for Kubelet Authorization

In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.

> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.

Log in to `controller-1`:

```
$ ssh -i ~/.ssh/id_rsa-k8s 10.240.0.11
```

Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

```
$ cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
```

The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.

Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:

```
$ cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
```
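
To confirm both objects were created, an optional check:

```
$ kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
$ kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig
```
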
## The Kubernetes Frontend Load Balancer

In this section you will set up a load balancer to front the Kubernetes API Servers.

### Setting up a Load Balancer

Log in to the load balancer:

```
$ ssh -i ~/.ssh/id_rsa-k8s 10.240.0.10
```

Install the required packages:

```
$ sudo apt-get install -y haproxy
```

Create the `/etc/haproxy/haproxy.cfg` configuration file:

```
$ cat << EOF | sudo tee /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend haproxynode
    bind *:6443
    mode tcp
    default_backend backendnodes

backend backendnodes
    mode tcp
    balance roundrobin
    option tcp-check
    option log-health-checks
    server node1 10.240.0.11:6443 check
    server node2 10.240.0.12:6443 check
    server node3 10.240.0.13:6443 check

listen stats
    bind :32700
    stats enable
    stats uri /
    stats hide-version
    stats auth someuser:password
EOF
```
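
Before restarting the service, the configuration file can be checked for syntax errors:

```
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```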

Enable and start the `haproxy` service:

```
$ {
    sudo systemctl enable haproxy
    sudo systemctl stop haproxy
    sudo systemctl start haproxy
  }
```
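
To confirm the load balancer is running and listening on the API port:

```
$ sudo systemctl status haproxy
$ sudo ss -tlnp | grep 6443
```
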
### Verification

Log in to one of the controller nodes and make an HTTPS request for the Kubernetes version info through the load balancer:

```
$ curl --cacert /var/lib/kubernetes/ca.pem https://10.240.0.10:6443/version
```

> output

```
{
  "major": "1",
  "minor": "12",
  "gitVersion": "v1.12.0",
  "gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
  "gitTreeState": "clean",
  "buildDate": "2018-09-27T16:55:41Z",
  "goVersion": "go1.10.4",
  "compiler": "gc",
  "platform": "linux/amd64"
}
```
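
The same request should also succeed from `client-1` (assuming the `ca.pem` generated in the certificate lab is still in your working directory there):

```
$ curl --cacert ca.pem https://10.240.0.10:6443/version
```
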
Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)