add docs on using custom kubeconfigs
pull/137/head
Kelsey Hightower 2017-03-24 04:08:34 -07:00
parent 99d342cc3c
commit 94cbe1e683
2 changed files with 75 additions and 12 deletions
@@ -1,17 +1,40 @@
# Setting up Authentication
In this lab you will set up the necessary authentication configs to enable Kubernetes clients to bootstrap and authenticate using RBAC (Role-Based Access Control).
## Download and Install kubectl
The kubectl client will be used to generate kubeconfig files which will be consumed by the kubelet and kube-proxy services.
### OS X
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
```
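As a quick sanity check, confirm the client is on your path and at the expected version:
```
kubectl version --client
```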
## Authentication
The following components will leverage Kubernetes RBAC:
* kubelet (client)
* Kubernetes API Server (server)

The other components, mainly the `scheduler` and `controller manager`, access the Kubernetes API server locally over the insecure API port which does not require authentication. The insecure port is only enabled for local access.
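To illustrate that last point, from a shell on a controller node you can hit the insecure port directly. This is a hedged sketch; it assumes the API server's default insecure bind of `127.0.0.1:8080`, which this tutorial does not override:
```
# Assumes the default insecure port (8080); no credentials required
curl http://127.0.0.1:8080/version
```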
### TLS Bootstrap Token
This section will walk you through the creation of a TLS bootstrap token that will be used to [bootstrap TLS client certificates for kubelets](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/).
Generate a token:
```
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
```
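The pipeline above reads 16 random bytes and renders them as a 32-character hex string; you can eyeball the result:
```
echo ${BOOTSTRAP_TOKEN}
```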
@@ -24,6 +47,8 @@ ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```
#### Distribute the bootstrap token file
Copy the `token.csv` file to each controller node:
```
@@ -35,9 +60,31 @@ for host in ${KUBERNETES_CONTROLLERS[*]}; do
done
```
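The diff elides the body of this loop. A minimal sketch of what it presumably does, assuming GCE, `gcloud compute copy-files`, and controller names that mirror the worker naming used later in this lab:
```
# Hypothetical reconstruction of the elided copy loop
KUBERNETES_CONTROLLERS=(controller0 controller1 controller2)
for host in ${KUBERNETES_CONTROLLERS[*]}; do
  gcloud compute copy-files token.csv ${host}:~/
done
```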
### Client Authentication Configs
This section will walk you through creating the kubeconfig files used to bootstrap kubelets, which then generate their own kubeconfigs from dynamically generated certificates, and a kubeconfig for authenticating kube-proxy clients.
Each kubeconfig requires a Kubernetes master to connect to. To support HA, the IP address assigned to the load balancer sitting in front of the Kubernetes API servers will be used.
#### Set the Kubernetes Public Address
##### GCE
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
--region us-central1 \
--format 'value(address)')
```
##### AWS
```
KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \
  --load-balancer-names kubernetes | \
jq -r '.LoadBalancerDescriptions[].DNSName')
```
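Whichever platform you used, confirm the variable is set before generating any kubeconfigs:
```
echo ${KUBERNETES_PUBLIC_ADDRESS}
```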
---
Generate a bootstrap kubeconfig file:
@@ -68,6 +115,8 @@ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
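The diff elides most of the command sequence; only the final `use-context` line is visible above. A representative sketch, assuming the cluster name, certificate file names, and API server port (6443) used elsewhere in this tutorial:
```
# Sketch of the elided bootstrap kubeconfig commands; user and file names are assumptions
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```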
### kube-proxy kubeconfig
Generate the `kube-proxy` kubeconfig file:
```
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
@@ -95,16 +144,30 @@ kubectl config set-context default \
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
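The middle of this block is elided in the diff; the missing step is presumably a `set-credentials` call. A sketch, assuming the kube-proxy client certificate names from the earlier TLS lab:
```
# Hypothetical elided step; certificate file names are assumptions
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
```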
#### Distribute client kubeconfig files
Copy the bootstrap and kube-proxy kubeconfig files to each worker node:
```
KUBERNETES_WORKER_NODES=(worker0 worker1 worker2)
```
##### GCE
```
for host in ${KUBERNETES_WORKER_NODES[*]}; do
  gcloud compute copy-files bootstrap.kubeconfig kube-proxy.kubeconfig ${host}:~/
done
```
##### AWS
```
for host in ${KUBERNETES_WORKER_NODES[*]}; do
  PUBLIC_IP_ADDRESS=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=${host}" | \
    jq -r '.Reservations[].Instances[].PublicIpAddress')
  scp -o "StrictHostKeyChecking no" bootstrap.kubeconfig kube-proxy.kubeconfig \
    ubuntu@${PUBLIC_IP_ADDRESS}:~/
done
```
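To spot-check the copy on a single worker (a hypothetical check that reuses the tag-based IP lookup above):
```
# Hypothetical spot check on worker0
PUBLIC_IP_ADDRESS=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=worker0" | \
  jq -r '.Reservations[].Instances[].PublicIpAddress')
ssh -o "StrictHostKeyChecking no" ubuntu@${PUBLIC_IP_ADDRESS} \
  ls bootstrap.kubeconfig kube-proxy.kubeconfig
```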
@@ -60,13 +60,13 @@ kubectl config set-credentials admin \
```
```
kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=admin
```
```
kubectl config use-context default
```
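Verify the active context took effect; this should print `default`:
```
kubectl config current-context
```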
At this point you should be able to connect securely to the remote API server:
@@ -88,8 +88,8 @@ etcd-1 Healthy {"health": "true"}
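The elided output above includes an `etcd-1 Healthy` context line, which suggests the check being run is a component health query; a likely candidate:
```
kubectl get componentstatuses
```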
kubectl get nodes
```
```
NAME      STATUS    AGE       VERSION
worker0   Ready     7m        v1.6.0-beta.4
worker1   Ready     5m        v1.6.0-beta.4
worker2   Ready     2m        v1.6.0-beta.4
```