update to Kubernetes 1.6

pull/137/head
Kelsey Hightower 2017-03-25 11:41:26 -07:00
parent e9c25522a4
commit f62e9c9777
2 changed files with 109 additions and 73 deletions


@@ -12,11 +12,11 @@ In this lab you will also create a frontend load balancer with a public IP address
 The Kubernetes components that make up the control plane include the following components:
-* Kubernetes API Server
-* Kubernetes Scheduler
-* Kubernetes Controller Manager
+* API Server
+* Scheduler
+* Controller Manager
-Each component is being run on the same machines for the following reasons:
+Each component is being run on the same machine for the following reasons:
 * The Scheduler and Controller Manager are tightly coupled with the API Server
 * Only one Scheduler and Controller Manager can be active at a given time, but it's ok to run multiple at the same time. Each component will elect a leader via the API Server.
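To see leader election in action once the control plane is up (an illustrative aside, not one of the lab steps, assuming the default leader-election settings in this release, which record the lease as an annotation on an Endpoints object in the `kube-system` namespace):

```
kubectl get endpoints kube-controller-manager --namespace kube-system --output yaml
```

The annotation in the output names the instance currently holding the lock; the same check works for `kube-scheduler`.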
@@ -52,16 +52,19 @@ sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/
 Download the official Kubernetes release binaries:
 ```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-apiserver
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-apiserver
 ```
 ```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-controller-manager
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-controller-manager
 ```
 ```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-scheduler
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-scheduler
 ```
 ```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kubectl
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kubectl
 ```
 Install the Kubernetes binaries:
@@ -77,7 +80,7 @@ sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
 ### Kubernetes API Server
-### Create the systemd unit file
+#### Create the systemd unit file
 Capture the internal IP address:
@@ -130,7 +133,7 @@ ExecStart=/usr/bin/kube-apiserver \\
 --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
 --kubelet-https=true \\
 --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
---service-account-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
+--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
 --service-cluster-ip-range=10.32.0.0/24 \\
 --service-node-port-range=30000-32767 \\
 --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
@@ -153,7 +156,13 @@ sudo mv kube-apiserver.service /etc/systemd/system/
 ```
 sudo systemctl daemon-reload
+```
+```
 sudo systemctl enable kube-apiserver
+```
+```
 sudo systemctl start kube-apiserver
 ```
@@ -199,7 +208,13 @@ sudo mv kube-controller-manager.service /etc/systemd/system/
 ```
 sudo systemctl daemon-reload
+```
+```
 sudo systemctl enable kube-controller-manager
+```
+```
 sudo systemctl start kube-controller-manager
 ```
@@ -236,7 +251,13 @@ sudo mv kube-scheduler.service /etc/systemd/system/
 ```
 sudo systemctl daemon-reload
+```
+```
 sudo systemctl enable kube-scheduler
+```
+```
 sudo systemctl start kube-scheduler
 ```
@@ -244,12 +265,12 @@ sudo systemctl start kube-scheduler
 sudo systemctl status kube-scheduler --no-pager
 ```
 ### Verification
 ```
 kubectl get componentstatuses
 ```
 ```
 NAME                 STATUS    MESSAGE              ERROR
 controller-manager   Healthy   ok
@@ -268,32 +289,33 @@ The virtual machines created in this tutorial will not have permission to complete
 ### GCE
 ```
-gcloud compute http-health-checks create kube-apiserver-check \
+gcloud compute http-health-checks create kube-apiserver-health-check \
 --description "Kubernetes API Server Health Check" \
 --port 8080 \
 --request-path /healthz
 ```
 ```
-gcloud compute target-pools create kubernetes-pool \
---http-health-check=kube-apiserver-check
+gcloud compute target-pools create kubernetes-target-pool \
+--http-health-check=kube-apiserver-health-check
 ```
 ```
-gcloud compute target-pools add-instances kubernetes-pool \
+gcloud compute target-pools add-instances kubernetes-target-pool \
 --instances controller0,controller1,controller2
 ```
 ```
-KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
+KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
+--region us-central1 \
 --format 'value(address)')
 ```
 ```
-gcloud compute forwarding-rules create kubernetes-rule \
+gcloud compute forwarding-rules create kubernetes-forwarding-rule \
 --address ${KUBERNETES_PUBLIC_ADDRESS} \
 --ports 6443 \
---target-pool kubernetes-pool \
+--target-pool kubernetes-target-pool \
 --region us-central1
 ```
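As a quick end-to-end check of the GCE forwarding rule (illustrative, not one of the original lab steps), the API server's version endpoint can be queried through the public address from any machine that has the cluster's `ca.pem` and the `KUBERNETES_PUBLIC_ADDRESS` variable set:

```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```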
@@ -304,17 +326,3 @@ aws elb register-instances-with-load-balancer \
 --load-balancer-name kubernetes \
 --instances ${CONTROLLER_0_INSTANCE_ID} ${CONTROLLER_1_INSTANCE_ID} ${CONTROLLER_2_INSTANCE_ID}
 ```
-## RBAC
-The following command will grant the `kubelet-bootstrap` user the permissions necessary to request a client TLS certificate.
-Bind the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role:
-```
-kubectl create clusterrolebinding kubelet-bootstrap \
---clusterrole=system:node-bootstrapper \
---user=kubelet-bootstrap
-```
-At this point kubelets can now request a TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/).


@@ -15,50 +15,53 @@ Kubernetes worker nodes are responsible for running your containers. All Kubernetes
 Some people would like to run workers and cluster services anywhere in the cluster. This is totally possible, and you'll have to decide what's best for your environment.
+## Prerequisites
+Each worker node will provision a unique TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/). The `kubelet-bootstrap` user must be granted permission to request a client TLS certificate. Run the following command on a controller node to enable TLS bootstrapping:
+Bind the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role:
+```
+kubectl create clusterrolebinding kubelet-bootstrap \
+--clusterrole=system:node-bootstrapper \
+--user=kubelet-bootstrap
+```
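To confirm the binding was created (illustrative, not part of the original steps):

```
kubectl get clusterrolebinding kubelet-bootstrap
```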
 ## Provision the Kubernetes Worker Nodes
 Run the following commands on `worker0`, `worker1`, `worker2`:
-### Set the Kubernetes Public Address
-#### GCE
-```
-KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
---region=us-central1 \
---format 'value(address)')
-```
-#### AWS
-```
-KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \
---load-balancer-name kubernetes | \
-jq -r '.LoadBalancerDescriptions[].DNSName')
-```
----
 ```
 sudo mkdir -p /var/lib/kubelet
 ```
 ```
-sudo mv bootstrap.kubeconfig kube-proxy.kubeconfig /var/lib/kubelet
+sudo mkdir -p /var/lib/kube-proxy
 ```
-#### Move the TLS certificates in place
+```
+sudo mkdir -p /var/run/kubernetes
+```
 ```
 sudo mkdir -p /var/lib/kubernetes
 ```
+```
+sudo mv bootstrap.kubeconfig /var/lib/kubelet
+```
+```
+sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy
+```
+Move the TLS certificates in place
 ```
 sudo mv ca.pem /var/lib/kubernetes/
 ```
-#### Docker
+### Install Docker
@@ -74,7 +77,6 @@ sudo cp docker/docker* /usr/bin/
 Create the Docker systemd unit file:
 ```
 cat > docker.service <<EOF
 [Unit]
@@ -104,7 +106,13 @@ sudo mv docker.service /etc/systemd/system/docker.service
 ```
 sudo systemctl daemon-reload
+```
+```
 sudo systemctl enable docker
+```
+```
 sudo systemctl start docker
 ```
@@ -112,10 +120,9 @@ sudo systemctl start docker
 sudo docker version
 ```
-#### kubelet
-The Kubernetes kubelet no longer relies on docker networking for pods! The Kubelet can now use [CNI - the Container Network Interface](https://github.com/containernetworking/cni) to manage machine level networking requirements.
+### Install the kubelet
+The Kubelet can now use [CNI - the Container Network Interface](https://github.com/containernetworking/cni) to manage machine level networking requirements.
 Download and install CNI plugins
@@ -131,17 +138,18 @@ wget https://storage.googleapis.com/kubernetes-release/network-plugins/cni-amd64
 sudo tar -xvf cni-amd64-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz -C /opt/cni
 ```
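A quick way to confirm the archive was unpacked (illustrative; the exact set of binaries depends on the bundle):

```
ls /opt/cni
```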
 Download and install the Kubernetes worker binaries:
 ```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kubectl
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kubectl
 ```
 ```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-proxy
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-proxy
 ```
 ```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kubelet
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kubelet
 ```
 ```
@@ -154,6 +162,11 @@ sudo mv kubectl kube-proxy kubelet /usr/bin/
 Create the kubelet systemd unit file:
+```
+API_SERVERS=$(sudo cat /var/lib/kubelet/bootstrap.kubeconfig | \
+grep server | cut -d ':' -f2,3,4 | tr -d '[:space:]')
+```
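The pipeline above simply pulls the `server:` URL out of `bootstrap.kubeconfig`; echoing the variable is a quick sanity check (the address below is a placeholder):

```
echo $API_SERVERS
```
```
https://XXX.XXX.XXX.XXX:6443
```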
 ```
 cat > kubelet.service <<EOF
 [Unit]
@@ -164,7 +177,7 @@ Requires=docker.service
 [Service]
 ExecStart=/usr/bin/kubelet \\
---api-servers=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
+--api-servers=${API_SERVERS} \\
 --allow-privileged=true \\
 --cluster-dns=10.32.0.10 \\
 --cluster-domain=cluster.local \\
@@ -190,12 +203,14 @@ sudo mv kubelet.service /etc/systemd/system/kubelet.service
 ```
 ```
-sudo chmod +w /var/run/kubernetes
+sudo systemctl daemon-reload
 ```
 ```
-sudo systemctl daemon-reload
 sudo systemctl enable kubelet
+```
+```
 sudo systemctl start kubelet
 ```
@@ -205,7 +220,6 @@ sudo systemctl status kubelet --no-pager
 #### kube-proxy
 ```
 cat > kube-proxy.service <<EOF
 [Unit]
@@ -216,8 +230,7 @@ Documentation=https://github.com/GoogleCloudPlatform/kubernetes
 ExecStart=/usr/bin/kube-proxy \\
 --cluster-cidr=10.200.0.0/16 \\
 --masquerade-all=true \\
---master=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
---kubeconfig=/var/lib/kubelet/kube-proxy.kubeconfig \\
+--kubeconfig=/var/lib/kube-proxy/kube-proxy.kubeconfig \\
 --proxy-mode=iptables \\
 --v=2
 Restart=on-failure
@@ -234,7 +247,13 @@ sudo mv kube-proxy.service /etc/systemd/system/kube-proxy.service
 ```
 sudo systemctl daemon-reload
+```
+```
 sudo systemctl enable kube-proxy
+```
+```
 sudo systemctl start kube-proxy
 ```
@@ -260,12 +279,21 @@ List the pending certificate requests:
 kubectl get csr
 ```
+```
+NAME        AGE       REQUESTOR           CONDITION
+csr-XXXXX   1m        kubelet-bootstrap   Pending
+```
 > Use the kubectl describe csr command to view the details of a specific signing request.
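For example (using the same placeholder request name as above):

```
kubectl describe csr csr-XXXXX
```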
 Approve each certificate signing request using the `kubectl certificate approve` command:
 ```
-kubectl certificate approve <csr-name>
+kubectl certificate approve csr-XXXXX
+```
+```
+certificatesigningrequest "csr-XXXXX" approved
 ```
 Once all certificate signing requests have been approved, all nodes should be registered with the cluster:
@@ -276,7 +304,7 @@ kubectl get nodes
 ```
 NAME      STATUS    AGE       VERSION
-worker0   Ready     7m        v1.6.0-beta.4
-worker1   Ready     5m        v1.6.0-beta.4
-worker2   Ready     2m        v1.6.0-beta.4
+worker0   Ready     7m        v1.6.0-rc.1
+worker1   Ready     5m        v1.6.0-rc.1
+worker2   Ready     2m        v1.6.0-rc.1
 ```