# Setting up Authentication
In this lab you will set up the authentication configs required for Kubernetes clients to bootstrap and authenticate using RBAC (Role-Based Access Control).
## Download and Install kubectl
The kubectl client will be used to generate kubeconfig files which will be consumed by the kubelet and kube-proxy services.
### OS X
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
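With kubectl in place, a quick client-only version check confirms the binary is installed and on your `PATH`; the output will vary with the release you downloaded:
```
kubectl version --client
```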
## Authentication
The following components will leverage Kubernetes RBAC:
* kubelet (client)
* Kubernetes API Server (server)

The other components, mainly the `scheduler` and `controller manager`, access the Kubernetes API server locally over the insecure API port, which does not require authentication. The insecure port is only enabled for local access.
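For reference only, the sketch below shows roughly how those components point at the insecure port; the actual scheduler and controller manager configuration is covered in a later lab, and only the flag relevant to this discussion is shown:
```
# Sketch: both components reach the API server over the local insecure port,
# so no client certificate or bootstrap token is needed for them.
kube-scheduler --master=http://127.0.0.1:8080
kube-controller-manager --master=http://127.0.0.1:8080
```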
### TLS Bootstrap Token
This section will walk you through the creation of a TLS bootstrap token that will be used to [bootstrap TLS client certificates for kubelets](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/).
Generate a token:
```
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
```
Generate a token file:
```
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```
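The file uses the static token file format, `token,user,uid,"group1,group2"`, and will later be handed to the API server via its `--token-auth-file` flag. Print it to confirm it contains a single well-formed entry:
```
cat token.csv
```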
#### Distribute the bootstrap token file
Copy the `token.csv` file to each controller node:
```
KUBERNETES_CONTROLLERS=(controller0 controller1 controller2)
```
```
for host in ${KUBERNETES_CONTROLLERS[*]}; do
  gcloud compute copy-files token.csv ${host}:~/
done
```
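Optionally, confirm the copy landed on a controller (assuming you have `gcloud compute ssh` access to the instances):
```
gcloud compute ssh controller0 --command "cat token.csv"
```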
### Client Authentication Configs
This section will walk you through creating the kubeconfig files used to bootstrap kubelets and to authenticate kube-proxy clients. Once bootstrapped, each kubelet generates its own kubeconfig backed by dynamically issued client certificates.

Each kubeconfig requires a Kubernetes master to connect to. To support H/A, the IP address assigned to the load balancer sitting in front of the Kubernetes API servers will be used.
#### Set the Kubernetes Public Address
##### GCE
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
  --region us-central1 \
  --format 'value(address)')
```
##### AWS
```
KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \
  --load-balancer-name kubernetes | \
  jq -r '.LoadBalancerDescriptions[].DNSName')
```
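Whichever platform you used, make sure the variable is non-empty before generating the kubeconfig files; an empty value would produce a broken `--server` URL:
```
echo ${KUBERNETES_PUBLIC_ADDRESS}
```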
---
Generate a bootstrap kubeconfig file:
```
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=bootstrap.kubeconfig
```
```
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
```
```
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
```
```
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```
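You can inspect the resulting file to verify the cluster, credentials, and context were wired together as expected:
```
kubectl config view --kubeconfig=bootstrap.kubeconfig
```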
### kube-proxy kubeconfig
Generate the `kube-proxy` kubeconfig file:
```
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
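As with the bootstrap kubeconfig, a quick check that the `default` context is active:
```
kubectl config current-context --kubeconfig=kube-proxy.kubeconfig
```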
#### Distribute client kubeconfig files
Copy the bootstrap and kube-proxy kubeconfig files to each worker node:
```
KUBERNETES_WORKER_NODES=(worker0 worker1 worker2)
```
##### GCE
```
for host in ${KUBERNETES_WORKER_NODES[*]}; do
  gcloud compute copy-files bootstrap.kubeconfig kube-proxy.kubeconfig ${host}:~/
done
```
##### AWS
```
for host in ${KUBERNETES_WORKER_NODES[*]}; do
  PUBLIC_IP_ADDRESS=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=${host}" | \
    jq -r '.Reservations[].Instances[].PublicIpAddress')
  scp -o "StrictHostKeyChecking no" bootstrap.kubeconfig kube-proxy.kubeconfig \
    ubuntu@${PUBLIC_IP_ADDRESS}:~/
done
```
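Regardless of platform, you can optionally verify that both files landed on a worker; the example below assumes `gcloud compute ssh` access on GCE (use plain `ssh` for AWS instances):
```
gcloud compute ssh worker0 --command "ls bootstrap.kubeconfig kube-proxy.kubeconfig"
```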