diff --git a/docs/05-kubernetes-controller.md b/docs/05-kubernetes-controller.md index 45671c6..455b4e1 100644 --- a/docs/05-kubernetes-controller.md +++ b/docs/05-kubernetes-controller.md @@ -12,11 +12,11 @@ In this lab you will also create a frontend load balancer with a public IP addre The Kubernetes components that make up the control plane include the following components: -* Kubernetes API Server -* Kubernetes Scheduler -* Kubernetes Controller Manager +* API Server +* Scheduler +* Controller Manager -Each component is being run on the same machines for the following reasons: +Each component is being run on the same machine for the following reasons: * The Scheduler and Controller Manager are tightly coupled with the API Server * Only one Scheduler and Controller Manager can be active at a given time, but it's ok to run multiple at the same time. Each component will elect a leader via the API Server. @@ -52,16 +52,19 @@ sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/ Download the official Kubernetes release binaries: ``` -wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-apiserver +wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-apiserver ``` + ``` -wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-controller-manager +wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-controller-manager ``` + ``` -wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-scheduler +wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-scheduler ``` + ``` -wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kubectl +wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kubectl ``` Install the 
Kubernetes binaries: @@ -77,7 +80,7 @@ sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/ ### Kubernetes API Server -### Create the systemd unit file +#### Create the systemd unit file Capture the internal IP address: @@ -130,7 +133,7 @@ ExecStart=/usr/bin/kube-apiserver \\ --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\ --kubelet-https=true \\ --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\ - --service-account-key-file=/var/lib/kubernetes/kubernetes-key.pem \\ + --service-account-key-file=/var/lib/kubernetes/ca-key.pem \\ --service-cluster-ip-range=10.32.0.0/24 \\ --service-node-port-range=30000-32767 \\ --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\ @@ -153,7 +156,13 @@ sudo mv kube-apiserver.service /etc/systemd/system/ ``` sudo systemctl daemon-reload +``` + +``` sudo systemctl enable kube-apiserver +``` + +``` sudo systemctl start kube-apiserver ``` @@ -199,7 +208,13 @@ sudo mv kube-controller-manager.service /etc/systemd/system/ ``` sudo systemctl daemon-reload +``` + +``` sudo systemctl enable kube-controller-manager +``` + +``` sudo systemctl start kube-controller-manager ``` @@ -236,7 +251,13 @@ sudo mv kube-scheduler.service /etc/systemd/system/ ``` sudo systemctl daemon-reload +``` + +``` sudo systemctl enable kube-scheduler +``` + +``` sudo systemctl start kube-scheduler ``` @@ -244,12 +265,12 @@ sudo systemctl start kube-scheduler sudo systemctl status kube-scheduler --no-pager ``` - ### Verification ``` kubectl get componentstatuses ``` + ``` NAME STATUS MESSAGE ERROR controller-manager Healthy ok @@ -268,32 +289,33 @@ The virtual machines created in this tutorial will not have permission to comple ### GCE ``` -gcloud compute http-health-checks create kube-apiserver-check \ +gcloud compute http-health-checks create kube-apiserver-health-check \ --description "Kubernetes API Server Health Check" \ --port 8080 \ --request-path /healthz ``` ``` -gcloud compute target-pools create kubernetes-pool \ 
- --http-health-check=kube-apiserver-check +gcloud compute target-pools create kubernetes-target-pool \ + --http-health-check=kube-apiserver-health-check ``` ``` -gcloud compute target-pools add-instances kubernetes-pool \ +gcloud compute target-pools add-instances kubernetes-target-pool \ --instances controller0,controller1,controller2 ``` ``` -KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \ +KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ + --region us-central1 \ --format 'value(address)') ``` ``` -gcloud compute forwarding-rules create kubernetes-rule \ +gcloud compute forwarding-rules create kubernetes-forwarding-rule \ --address ${KUBERNETES_PUBLIC_ADDRESS} \ --ports 6443 \ - --target-pool kubernetes-pool \ + --target-pool kubernetes-target-pool \ --region us-central1 ``` @@ -304,17 +326,3 @@ aws elb register-instances-with-load-balancer \ --load-balancer-name kubernetes \ --instances ${CONTROLLER_0_INSTANCE_ID} ${CONTROLLER_1_INSTANCE_ID} ${CONTROLLER_2_INSTANCE_ID} ``` - -## RBAC - -The following command will grant the `kubelet-bootstrap` user the permissions necessary to request a client TLS certificate. - -Bind the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role: - -``` -kubectl create clusterrolebinding kubelet-bootstrap \ - --clusterrole=system:node-bootstrapper \ - --user=kubelet-bootstrap -``` - -At this point kubelets can now request a TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/). diff --git a/docs/06-kubernetes-worker.md b/docs/06-kubernetes-worker.md index a82289a..9cc264a 100644 --- a/docs/06-kubernetes-worker.md +++ b/docs/06-kubernetes-worker.md @@ -15,50 +15,53 @@ Kubernetes worker nodes are responsible for running your containers. All Kuberne Some people would like to run workers and cluster services anywhere in the cluster. 
This is totally possible, and you'll have to decide what's best for your environment. +## Prerequisites + +Each worker node will provision a unique TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/). The `kubelet-bootstrap` user must be granted permission to request a client TLS certificate. Run the following command on a controller node to enable TLS bootstrapping: + +Bind the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role: + +``` +kubectl create clusterrolebinding kubelet-bootstrap \ + --clusterrole=system:node-bootstrapper \ + --user=kubelet-bootstrap +``` ## Provision the Kubernetes Worker Nodes Run the following commands on `worker0`, `worker1`, `worker2`: -### Set the Kubernetes Public Address - -#### GCE - -``` -KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \ - --region=us-central1 \ - --format 'value(address)') -``` - -#### AWS - -``` -KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \ - --load-balancer-name kubernetes | \ - jq -r '.LoadBalancerDescriptions[].DNSName') -``` - ---- - ``` sudo mkdir -p /var/lib/kubelet ``` ``` -sudo mv bootstrap.kubeconfig kube-proxy.kubeconfig /var/lib/kubelet +sudo mkdir -p /var/lib/kube-proxy ``` -#### Move the TLS certificates in place +``` +sudo mkdir -p /var/run/kubernetes +``` ``` sudo mkdir -p /var/lib/kubernetes ``` +``` +sudo mv bootstrap.kubeconfig /var/lib/kubelet +``` + +``` +sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy +``` + +Move the TLS certificates into place: + ``` sudo mv ca.pem /var/lib/kubernetes/ ``` -#### Docker +### Install Docker ``` wget https://get.docker.com/builds/Linux/x86_64/docker-1.12.6.tgz @@ -74,7 +77,6 @@ sudo cp docker/docker* /usr/bin/ Create the Docker systemd unit file: - ``` cat > docker.service < kubelet.service < kube-proxy.service < Use the kubectl describe csr command to view the details of a specific signing request.
Approve each certificate signing request using the `kubectl certificate approve` command: ``` -kubectl certificate approve +kubectl certificate approve csr-XXXXX +``` + +``` +certificatesigningrequest "csr-XXXXX" approved ``` Once all certificate signing requests have been approved all nodes should be registered with the cluster: @@ -276,7 +304,7 @@ kubectl get nodes ``` NAME STATUS AGE VERSION -worker0 Ready 7m v1.6.0-beta.4 -worker1 Ready 5m v1.6.0-beta.4 -worker2 Ready 2m v1.6.0-beta.4 +worker0 Ready 7m v1.6.0-rc.1 +worker1 Ready 5m v1.6.0-rc.1 +worker2 Ready 2m v1.6.0-rc.1 ```
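A note on the CSR approval step this patch adds: approving each pending request by name with `kubectl certificate approve csr-XXXXX` is fine for three workers, but the names are random. A minimal sketch of how the pending names could be extracted from `kubectl get csr` output for batch approval — the CSR names and table below are hypothetical, and `kubectl` is only echoed rather than invoked:

```shell
# Hypothetical `kubectl get csr` output, mirroring the table shown in the doc.
csr_output='NAME        AGE       REQUESTOR           CONDITION
csr-aaaaa   1m        kubelet-bootstrap   Pending
csr-bbbbb   2m        kubelet-bootstrap   Approved,Issued'

# Extract only the Pending CSR names: skip the header row (NR > 1)
# and keep rows whose CONDITION column ($4) is "Pending".
pending=$(printf '%s\n' "$csr_output" | awk 'NR > 1 && $4 == "Pending" {print $1}')

# Each pending name would then be passed to `kubectl certificate approve`.
# Echoed here instead of executed, since this sketch has no cluster to talk to.
for csr in $pending; do
  echo "kubectl certificate approve $csr"
done
```

Against a live cluster the `echo` would be dropped; the filter matters because already-approved requests show `Approved,Issued` in the CONDITION column and need no further action.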