add authentication lab

pull/137/head
Kelsey Hightower 2017-03-23 23:08:54 -07:00
parent 5a5314f3c9
commit 99d342cc3c
11 changed files with 203 additions and 91 deletions


@@ -18,11 +18,12 @@ The target audience for this tutorial is someone planning to support a productio
## Cluster Details
* Kubernetes 1.6.0
* Docker 1.12.6
* etcd 3.1.4
* [CNI Based Networking](https://github.com/containernetworking/cni)
* Secure communication between all components (etcd, control plane, workers)
* Default Service Account and Secrets
* RBAC
### What's Missing
@@ -31,7 +32,6 @@ The resulting cluster will be missing the following items:
* [Cluster add-ons](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
* [Logging](http://kubernetes.io/docs/user-guide/logging)
* [No Cloud Provider Integration](http://kubernetes.io/docs/getting-started-guides/)
### Assumptions
@@ -61,11 +61,12 @@ While GCP or AWS will be used for basic infrastructure needs, the things learned
* [Cloud Infrastructure Provisioning](docs/01-infrastructure.md)
* [Setting up a CA and TLS Cert Generation](docs/02-certificate-authority.md)
* [Setting up authentication](docs/03-authentication.md)
* [Bootstrapping an H/A etcd cluster](docs/04-etcd.md)
* [Bootstrapping an H/A Kubernetes Control Plane](docs/05-kubernetes-controller.md)
* [Bootstrapping Kubernetes Workers](docs/06-kubernetes-worker.md)
* [Configuring the Kubernetes Client - Remote Access](docs/07-kubectl.md)
* [Managing the Container Network Routes](docs/08-network.md)
* [Deploying the Cluster DNS Add-on](docs/09-dns-addon.md)
* [Smoke Test](docs/10-smoke-test.md)
* [Cleaning Up](docs/11-cleanup.md)


@@ -13,10 +13,14 @@ In this lab you will generate a single set of TLS certificates that can be used
After completing this lab you should have the following TLS keys and certificates:
```
admin.pem
admin-key.pem
ca-key.pem
ca.pem
kubernetes-key.pem
kubernetes.pem
kube-proxy.pem
kube-proxy-key.pem
```
@@ -182,6 +186,50 @@ admin.csr
admin.pem
```
Create the `kube-proxy-csr.json` file:
```
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Cluster",
      "ST": "Oregon"
    }
  ]
}
EOF
```
Generate the node-proxier certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
```
Results:
```
kube-proxy-key.pem
kube-proxy.csr
kube-proxy.pem
```
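Optionally, verify the identity encoded in the new certificate; the subject should show the `system:kube-proxy` common name and `system:node-proxier` organization defined in the CSR above (assumes `openssl` is available on your workstation):

```
# Print the certificate subject; expect CN=system:kube-proxy and O=system:node-proxier.
openssl x509 -in kube-proxy.pem -noout -subject
```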
Create the `kubernetes-csr.json` file:
```

docs/03-auth-configs.md Normal file

@@ -0,0 +1,110 @@
# Setting up Authentication
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
--format 'value(address)')
```
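If `gcloud` prompts for a region when looking up the address, pass the region the static IP was reserved in; `us-central1` is assumed here to match the rest of the tutorial. A quick check that the variable is set:

```
# Should print the static IP reserved in the infrastructure lab.
echo ${KUBERNETES_PUBLIC_ADDRESS}
```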
## Authentication
[Token based authentication](http://kubernetes.io/docs/admin/authentication) will be used to bootstrap the Kubernetes cluster. The authentication token is used by the following components:
* kubelet (client)
* Kubernetes API Server (server)
The other components, mainly the `scheduler` and `controller manager`, access the Kubernetes API server locally over the insecure API port, which does not require authentication. The insecure port is only enabled for local access.
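To make the distinction concrete, here is an optional check you can run later, once the control plane is up; it assumes the insecure port listens on the default `127.0.0.1:8080` (the actual flags are set in the controller lab):

```
# From a controller node: the insecure port answers without any credentials.
curl http://127.0.0.1:8080/healthz
```

Remote clients instead go through the secure port on `6443` and must present a token or client certificate, which is what the rest of this lab sets up.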
Generate a token:
```
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
```
Generate a token file:
```
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```
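Each line of the token file has the form `token,user,uid,"group1,group2,..."`. In the controller lab the API server reads this file (via its `--token-auth-file` flag), which is how the `kubelet-bootstrap` user comes into existence. Sanity-check what was just written:

```
# Expect a single line mapping the random token to the kubelet-bootstrap user.
cat token.csv
```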
Copy the `token.csv` file to each controller node:
```
KUBERNETES_CONTROLLERS=(controller0 controller1 controller2)
```
```
for host in ${KUBERNETES_CONTROLLERS[*]}; do
gcloud compute copy-files token.csv ${host}:~/
done
```
## Client Authentication Configs
### bootstrap kubeconfig
Generate a bootstrap kubeconfig file:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=bootstrap.kubeconfig
```
```
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
```
```
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
```
```
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```
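Optionally, review the resulting file; the CA certificate is embedded and the bootstrap token is attached to the `kubelet-bootstrap` user:

```
# Inspect the kubeconfig; sensitive fields are redacted unless --raw is passed.
kubectl config view --kubeconfig=bootstrap.kubeconfig
```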
### kube-proxy kubeconfig
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
```
```
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
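As with the bootstrap kubeconfig, a quick check that the `default` context ties the `kube-proxy` user to the `kubernetes-the-hard-way` cluster:

```
# List the contexts defined in the kube-proxy kubeconfig.
kubectl config get-contexts --kubeconfig=kube-proxy.kubeconfig
```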
### Distribute client authentication configs
Copy the bootstrap kubeconfig file to each worker node:
```
KUBERNETES_WORKER_NODES=(worker0 worker1 worker2)
```
```
for host in ${KUBERNETES_WORKER_NODES[*]}; do
gcloud compute copy-files bootstrap.kubeconfig kube-proxy.kubeconfig ${host}:~/
done
```
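To confirm the copies landed, spot-check one of the nodes (`worker0` is just the first node from the loop above):

```
# Both kubeconfig files should be present in the node's home directory.
gcloud compute ssh worker0 --command "ls -l bootstrap.kubeconfig kube-proxy.kubeconfig"
```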


@@ -151,7 +151,7 @@ Once all 3 etcd nodes have been bootstrapped verify the etcd cluster is healthy:
* On one of the controller nodes run the following command:
```
sudo etcdctl \
--ca-file=/etc/etcd/ca.pem \
--cert-file=/etc/etcd/kubernetes.pem \
--key-file=/etc/etcd/kubernetes-key.pem \


@@ -23,84 +23,16 @@ Each component is being run on the same machines for the following reasons:
* Running multiple copies of each component is required for H/A
* Running each component next to the API Server eases configuration.
## Setup Authentication and Authorization
### Authentication
[Token based authentication](http://kubernetes.io/docs/admin/authentication) will be used to bootstrap the Kubernetes cluster. The authentication token is used by the following components:
* kubelet (client)
* Kubernetes API Server (server)
The other components, mainly the `scheduler` and `controller manager`, access the Kubernetes API server locally over the insecure API port which does not require authentication. The insecure port is only enabled for local access.
Generate a token:
```
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
```
Generate a token file:
```
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
```
Copy the `token.csv` file to each controller node:
```
KUBERNETES_CONTROLLERS=(controller0 controller1 controller2)
```
```
for host in ${KUBERNETES_CONTROLLERS[*]}; do
gcloud compute copy-files token.csv ${host}:~/
done
```
Generate a bootstrap kubeconfig file:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
--format 'value(address)')
```
```
cat > bootstrap.kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: https://${KUBERNETES_PUBLIC_ADDRESS}:6443
contexts:
- name: kubelet-bootstrap
  context:
    cluster: kubernetes
    user: kubelet-bootstrap
current-context: kubelet-bootstrap
users:
- name: kubelet-bootstrap
  user:
    token: ${BOOTSTRAP_TOKEN}
EOF
```
Copy the bootstrap kubeconfig file to each worker node:
```
KUBERNETES_WORKER_NODES=(worker0 worker1 worker2)
```
```
for host in ${KUBERNETES_WORKER_NODES[*]}; do
gcloud compute copy-files bootstrap.kubeconfig ${host}:~/
done
```
## Provision the Kubernetes Controller Cluster
Run the following commands on `controller0`, `controller1`, `controller2`:
Copy the bootstrap token into place:
```
sudo mkdir -p /var/lib/kubernetes/
```
```
sudo mv token.csv /var/lib/kubernetes/
```
@@ -111,10 +43,6 @@ The TLS certificates created in the [Setting up a CA and TLS Cert Generation](02
Copy the TLS certificates to the Kubernetes configuration directory:
```
sudo mkdir -p /var/lib/kubernetes
```
```
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/
```
@@ -161,7 +89,7 @@ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
```
```
CLOUD_PROVIDER=gce
```
#### AWS
@@ -374,7 +302,8 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
gcloud compute forwarding-rules create kubernetes-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--target-pool kubernetes-pool \
--region us-central1
```
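Verify the forwarding rule picked up the static address and port range (the `us-central1` region matches the one the rule was created in):

```
# Confirm the IP address, port range, and target pool of the new rule.
gcloud compute forwarding-rules describe kubernetes-rule --region us-central1
```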
### AWS
@@ -389,6 +318,10 @@ aws elb register-instances-with-load-balancer \
Set up bootstrapping roles:
```
gcloud compute ssh controller0
```
```
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \


@@ -26,6 +26,7 @@ Run the following commands on `worker0`, `worker1`, `worker2`:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
--region=us-central1 \
--format 'value(address)')
```
@@ -44,7 +45,7 @@ sudo mkdir -p /var/lib/kubelet
```
```
sudo mv bootstrap.kubeconfig kube-proxy.kubeconfig /var/lib/kubelet
```
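A quick check that both files are where the kubelet and kube-proxy units will look for them:

```
# Expect bootstrap.kubeconfig and kube-proxy.kubeconfig alongside any existing files.
sudo ls -l /var/lib/kubelet/
```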
#### Move the TLS certificates in place
@@ -187,6 +188,10 @@ EOF
sudo mv kubelet.service /etc/systemd/system/kubelet.service
```
```
sudo chmod +w /var/run/kubernetes
```
```
sudo systemctl daemon-reload
sudo systemctl enable kubelet
@@ -197,6 +202,20 @@ sudo systemctl start kubelet
sudo systemctl status kubelet --no-pager
```
Approve the certificate:
```
gcloud compute ssh controller0
```
```
kubectl get csr
```
```
kubectl certificate approve <csr-name>
```
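If several workers are bootstrapping at the same time, every pending request can be approved in one pass instead of copying names by hand (a convenience sketch, not part of the original steps):

```
# Approve every CSR currently known to the API server.
for csr in $(kubectl get csr -o jsonpath='{.items[*].metadata.name}'); do
  kubectl certificate approve ${csr}
done
```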
#### kube-proxy
@@ -210,7 +229,7 @@ Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/bin/kube-proxy \\
--master=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
--kubeconfig=/var/lib/kubelet/kube-proxy.kubeconfig \\
--proxy-mode=iptables \\
--v=2
Restart=on-failure
@@ -218,6 +237,7 @@ RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
```