doc clean up and basic formatting improvements

pull/137/head
Kelsey Hightower 2017-03-25 18:52:58 -07:00
parent 8022f4077b
commit f49493d286
6 changed files with 22 additions and 61 deletions

View File

@@ -1,8 +1,6 @@
-# Setting up a Certificate Authority and TLS Cert Generation
+# Setting up a Certificate Authority and Creating TLS Certificates
-In this lab you will setup the necessary PKI infrastructure to secure the Kubernetes components. This lab will leverage CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), to bootstrap a Certificate Authority and generate TLS certificates.
-In this lab you will generate a set of TLS certificates that can be used to secure the following Kubernetes components:
+In this lab you will setup the necessary PKI infrastructure to secure the Kubernetes components. This lab will leverage CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), to bootstrap a Certificate Authority and generate TLS certificates to secure the following Kubernetes components:
* etcd
* kube-apiserver
@@ -22,7 +20,6 @@ kube-proxy.pem
kube-proxy-key.pem
```
## Install CFSSL
This lab requires the `cfssl` and `cfssljson` binaries. Download them from the [cfssl repository](https://pkg.cfssl.org).
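For anyone following along, a minimal sketch of installing the two binaries on a Linux workstation. The R1.2 release URLs and the `/usr/local/bin` destination are assumptions based on how pkg.cfssl.org was commonly used at the time, so verify them against the repository before running:

```
# Download the cfssl and cfssljson binaries (Linux amd64; URLs assumed)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

# Make them executable and move them onto the PATH
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

# Confirm the tools are callable
cfssl version
```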
@@ -101,7 +98,7 @@ cat > ca-csr.json <<EOF
EOF
```
-Generate the CA certificate and private key:
+Generate a CA certificate and private key:
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
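The resulting `ca.pem` can be sanity-checked before it is used to sign anything. A quick inspection, assuming `openssl` is available locally:

```
# Print the CA certificate's subject, validity period, and CA:TRUE constraint
openssl x509 -in ca.pem -text -noout
```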
@@ -116,8 +113,7 @@ ca.pem
## Generate client and server TLS certificates
-In this section we will generate TLS certificates for all each Kubernetes component and a client certificate for an admin client.
+In this section we will generate TLS certificates for each Kubernetes component and a client certificate for the admin user.
### Create the Admin client certificate
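The admin CSR itself falls outside this diff. As a rough sketch only, a client certificate request of the general shape the guide uses; the name fields and the `kubernetes` signing profile below are assumptions, not content from this commit:

```
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "O": "system:masters",
      "OU": "Cluster"
    }
  ]
}
EOF

# Sign the admin client certificate with the CA created above
# (the "kubernetes" profile is assumed to be defined in ca-config.json)
cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin
```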
@@ -209,8 +205,6 @@ kube-proxy.pem
### Create the kubernetes server certificate
-Set the Kubernetes Public IP Address
The Kubernetes public IP address will be included in the list of subject alternative names for the Kubernetes server certificate. This will ensure the TLS certificate is valid for remote client access.
```
@@ -219,9 +213,7 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
  --format 'value(address)')
```
----
-Create the kubernetes server certificate signing request:
+Create the Kubernetes server certificate signing request:
```
cat > kubernetes-csr.json <<EOF
@@ -232,9 +224,6 @@ cat > kubernetes-csr.json <<EOF
    "10.240.0.10",
    "10.240.0.11",
    "10.240.0.12",
-    "ip-10-240-0-10",
-    "ip-10-240-0-11",
-    "ip-10-240-0-12",
    "${KUBERNETES_PUBLIC_ADDRESS}",
    "127.0.0.1",
    "kubernetes.default"
@@ -278,26 +267,16 @@ kubernetes.pem
-Set the list of Kubernetes hosts where the certs should be copied to:
-```
-KUBERNETES_WORKERS=(worker0 worker1 worker2)
-```
-```
-KUBERNETES_CONTROLLERS=(controller0 controller1 controller2)
-```
-The following command will:
-* Copy the TLS certificates and keys to each Kubernetes host using the `gcloud compute copy-files` command.
+The following commands will copy the TLS certificates and keys to each Kubernetes host using the `gcloud compute copy-files` command.
```
-for host in ${KUBERNETES_WORKERS[*]}; do
+for host in worker0 worker1 worker2; do
  gcloud compute copy-files ca.pem kube-proxy.pem kube-proxy-key.pem ${host}:~/
done
```
```
-for host in ${KUBERNETES_CONTROLLERS[*]}; do
+for host in controller0 controller1 controller2; do
  gcloud compute copy-files ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${host}:~/
done
```
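To confirm the copies landed, one option is to list the files over SSH. A small spot check, using the standard `--command` flag of `gcloud compute ssh` (host names taken from the loops above):

```
# Spot-check one worker and one controller for the expected PEM files
gcloud compute ssh worker0 --command "ls -l ca.pem kube-proxy.pem kube-proxy-key.pem"
gcloud compute ssh controller0 --command "ls -l ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem"
```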

View File

@@ -51,11 +51,7 @@ EOF
Distribute the bootstrap token file to each controller node:
```
-KUBERNETES_CONTROLLERS=(controller0 controller1 controller2)
-```
-```
-for host in ${KUBERNETES_CONTROLLERS[*]}; do
+for host in controller0 controller1 controller2; do
  gcloud compute copy-files token.csv ${host}:~/
done
```
@@ -136,11 +132,7 @@ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
## Distribute the client kubeconfig files
```
-KUBERNETES_WORKERS=(worker0 worker1 worker2)
-```
-```
-for host in ${KUBERNETES_WORKERS[*]}; do
+for host in worker0 worker1 worker2; do
  gcloud compute copy-files bootstrap.kubeconfig kube-proxy.kubeconfig ${host}:~/
done
```

View File

@@ -27,6 +27,10 @@ Each component is being run on the same machine for the following reasons:
Run the following commands on `controller0`, `controller1`, `controller2`:
+> Login to each machine using the gcloud compute ssh command
+---
Copy the bootstrap token into place:
```
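The command itself is cut off by the hunk boundary. A plausible sketch of this step, assuming the API server will later read the token file out of `/var/lib/kubernetes` (the target directory is an assumption, not shown in this diff):

```
# Run on each controller after logging in, e.g. via: gcloud compute ssh controller0
# Move the bootstrap token where kube-apiserver can find it (path assumed)
sudo mkdir -p /var/lib/kubernetes
sudo mv token.csv /var/lib/kubernetes/
```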
@@ -79,18 +83,13 @@ sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
### Kubernetes API Server
-#### Create the systemd unit file
-Capture the internal IP address:
+Capture the internal IP address of the machine:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
----
Create the systemd unit file:
```
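The unit file body is also outside the hunk. As an illustration of the shape only, an abridged unit written the way the guide writes them (a heredoc expanded on the controller); the flag list, etcd endpoints, and file paths below are assumptions and are far from the complete set the real unit passes:

```
cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --token-auth-file=/var/lib/kubernetes/token.csv \\
  --allow-privileged=true
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Install and start the unit (paths assumed)
sudo mv kube-apiserver.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver
```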

View File

@@ -17,9 +17,13 @@ Some people would like to run workers and cluster services anywhere in the clust
## Prerequisites
-Each worker node will provision a unqiue TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/). The `kubelet-bootstrap` user must be granted permission to request a client TLS certificate. Run the following command on a controller node to enable TLS bootstrapping:
-Bind the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role:
+Each worker node will provision a unique TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/). The `kubelet-bootstrap` user must be granted permission to request a client TLS certificate.
+```
+gcloud compute ssh controller0
+```
+Enable TLS bootstrapping by binding the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role:
```
kubectl create clusterrolebinding kubelet-bootstrap \
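The continuation of this command falls outside the hunk. Based on the role and user named in the prose above, the full invocation presumably looks like the following sketch (not copied from the diff):

```
# Allow the kubelet-bootstrap user to request client certificates
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```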
@@ -32,21 +36,13 @@ kubectl create clusterrolebinding kubelet-bootstrap \
Run the following commands on `worker0`, `worker1`, `worker2`:
```
-sudo mkdir -p /var/lib/kubelet
-```
-```
-sudo mkdir -p /var/lib/kube-proxy
+sudo mkdir -p /var/lib/{kubelet,kube-proxy,kubernetes}
```
```
sudo mkdir -p /var/run/kubernetes
```
-```
-sudo mkdir -p /var/lib/kubernetes
-```
```
sudo mv bootstrap.kubeconfig /var/lib/kubelet
```

View File

@@ -74,7 +74,6 @@ etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```
```
kubectl get nodes
```

View File

@@ -19,8 +19,6 @@ kubectl create clusterrolebinding serviceaccounts-cluster-admin \
kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/services/kubedns.yaml
```
-#### Verification
```
kubectl --namespace=kube-system get svc
```
@@ -36,8 +34,6 @@ kube-dns 10.32.0.10 <none> 53/UDP,53/TCP 5s
kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/kubedns.yaml
```
-#### Verification
```
kubectl --namespace=kube-system get pods
```
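Beyond checking that the kube-dns pods are running, cluster DNS can be exercised end to end by resolving a service name from inside a pod. A hedged sketch, not part of this commit; note that `nslookup` behavior depends on the busybox image version:

```
# Start a throwaway busybox workload to test DNS from inside the cluster
kubectl run busybox --image=busybox --command -- sleep 3600

# Find the pod and resolve the kubernetes service through kube-dns
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
kubectl exec $POD_NAME -- nslookup kubernetes
```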