update to Kubernetes 1.6

pull/137/head
Kelsey Hightower 2017-03-25 11:41:26 -07:00
parent e9c25522a4
commit f62e9c9777
2 changed files with 109 additions and 73 deletions

In this lab you will also create a frontend load balancer with a public IP address.
The Kubernetes components that make up the control plane include the following components:
* API Server
* Scheduler
* Controller Manager
Each component is being run on the same machine for the following reasons:
* The Scheduler and Controller Manager are tightly coupled with the API Server
* Only one Scheduler and one Controller Manager can be active at a given time, but it's fine to run multiple instances of each; every instance elects a leader via the API Server (see the quick check below).
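The active leader is recorded in an annotation on an Endpoints object in the `kube-system` namespace. A quick way to peek at it, assuming the 1.6-era `control-plane.alpha.kubernetes.io/leader` annotation:

```
kubectl --namespace=kube-system get endpoints kube-scheduler --output=yaml | grep leader
```

The same check works for the `kube-controller-manager` endpoints object.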
Move the TLS certificates into place:

```
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/
```
Download the official Kubernetes release binaries:
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-apiserver
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-controller-manager
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-scheduler
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kubectl
```
Install the Kubernetes binaries:
```
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
```
### Kubernetes API Server
#### Create the systemd unit file
Capture the internal IP address:
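The command that captures `INTERNAL_IP` is not visible here. On GCE a typical approach, offered as an assumption rather than the tutorial's exact command, is to query the instance metadata service:

```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```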
Create the `kube-apiserver.service` systemd unit file:

```
cat > kube-apiserver.service <<EOF
...
ExecStart=/usr/bin/kube-apiserver \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
...
EOF
```

```
sudo mv kube-apiserver.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-apiserver
```
```
sudo systemctl start kube-apiserver
```
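The same `systemctl status` check used for the other services below works here too, and is worth running before moving on:

```
sudo systemctl status kube-apiserver --no-pager
```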
Move the `kube-controller-manager.service` unit file into place:

```
sudo mv kube-controller-manager.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-controller-manager
```
```
sudo systemctl start kube-controller-manager
```
Move the `kube-scheduler.service` unit file into place:

```
sudo mv kube-scheduler.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-scheduler
```
```
sudo systemctl start kube-scheduler
```
```
sudo systemctl status kube-scheduler --no-pager
```
### Verification
```
kubectl get componentstatuses
```
```
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
```

The virtual machines created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to provision the cluster infrastructure.
### GCE
```
gcloud compute http-health-checks create kube-apiserver-health-check \
--description "Kubernetes API Server Health Check" \
--port 8080 \
--request-path /healthz
```
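Before wiring this health check into a target pool, you can hit the same endpoint directly from one of the controllers; the API server serves `/healthz` on its insecure port and should return `ok`:

```
curl http://127.0.0.1:8080/healthz
```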
```
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check=kube-apiserver-health-check
```
```
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller0,controller1,controller2
```
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region us-central1 \
--format 'value(address)')
```
```
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--target-pool kubernetes-target-pool \
--region us-central1
```
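With the forwarding rule in place, a quick smoke test from your workstation confirms the load balancer reaches an API server. This sketch assumes the `ca.pem` generated earlier in the tutorial is in the current directory; depending on your authorization settings the request may also require client credentials:

```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```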
### AWS

```
aws elb register-instances-with-load-balancer \
--load-balancer-name kubernetes \
--instances ${CONTROLLER_0_INSTANCE_ID} ${CONTROLLER_1_INSTANCE_ID} ${CONTROLLER_2_INSTANCE_ID}
```

---
Kubernetes worker nodes are responsible for running your containers. All Kubernetes clusters need one or more worker nodes.
Some people would like to run workers and cluster services anywhere in the cluster. This is totally possible, and you'll have to decide what's best for your environment.
## Prerequisites
Each worker node will provision a unique TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/). The `kubelet-bootstrap` user must be granted permission to request a client TLS certificate. Run the following command on a controller node to enable TLS bootstrapping:
Bind the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role:
```
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
```
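To confirm the binding took effect, read it back:

```
kubectl get clusterrolebinding kubelet-bootstrap --output=yaml
```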
## Provision the Kubernetes Worker Nodes
Run the following commands on `worker0`, `worker1`, `worker2`:
### Set the Kubernetes Public Address
#### GCE
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region=us-central1 \
--format 'value(address)')
```
#### AWS
```
KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \
--load-balancer-name kubernetes | \
jq -r '.LoadBalancerDescriptions[].DNSName')
```
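Whichever platform you are on, verify the variable is set before continuing:

```
echo $KUBERNETES_PUBLIC_ADDRESS
```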
---
```
sudo mkdir -p /var/lib/kubelet
```
```
sudo mkdir -p /var/lib/kube-proxy
```
```
sudo mkdir -p /var/run/kubernetes
```
```
sudo mkdir -p /var/lib/kubernetes
```
```
sudo mv bootstrap.kubeconfig /var/lib/kubelet
```
```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy
```
Move the TLS certificates in place:
```
sudo mv ca.pem /var/lib/kubernetes/
```
### Install Docker
```
wget https://get.docker.com/builds/Linux/x86_64/docker-1.12.6.tgz
```

```
tar -xvf docker-1.12.6.tgz
```

```
sudo cp docker/docker* /usr/bin/
```
Create the Docker systemd unit file:
```
cat > docker.service <<EOF
[Unit]
...
EOF
```

```
sudo mv docker.service /etc/systemd/system/docker.service
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable docker
```
```
sudo systemctl start docker
```
```
sudo docker version
```
### Install the kubelet

The Kubelet can now use [CNI - the Container Network Interface](https://github.com/containernetworking/cni) to manage machine level networking requirements.
Download and install the CNI plugins:
```
wget https://storage.googleapis.com/kubernetes-release/network-plugins/cni-amd64-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz
```

```
sudo mkdir -p /opt/cni
```

```
sudo tar -xvf cni-amd64-0799f5732f2a11b329d9e3d51b9c8f2e3759f2ff.tar.gz -C /opt/cni
```
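The plugins should now be on disk. Listing the target directory is a quick sanity check; the exact layout depends on the contents of the archive:

```
ls /opt/cni
```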
Download and install the Kubernetes worker binaries:
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kubectl
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-proxy
```
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kubelet
```
```
chmod +x kubectl kube-proxy kubelet
sudo mv kubectl kube-proxy kubelet /usr/bin/
```
Create the kubelet systemd unit file:
```
API_SERVERS=$(sudo cat /var/lib/kubelet/bootstrap.kubeconfig | \
grep server | cut -d ':' -f2,3,4 | tr -d '[:space:]')
```
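Sanity-check the extracted value before baking it into the unit file; it should be a single `https://...:6443` style URL:

```
echo $API_SERVERS
```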
```
cat > kubelet.service <<EOF
[Unit]
...
Requires=docker.service

[Service]
ExecStart=/usr/bin/kubelet \\
--api-servers=${API_SERVERS} \\
--allow-privileged=true \\
--cluster-dns=10.32.0.10 \\
--cluster-domain=cluster.local \\
...
EOF
```

```
sudo mv kubelet.service /etc/systemd/system/kubelet.service
```

```
sudo systemctl daemon-reload
```

```
sudo systemctl enable kubelet
```
```
sudo systemctl start kubelet
```
```
sudo systemctl status kubelet --no-pager
```
### kube-proxy
```
cat > kube-proxy.service <<EOF
[Unit]
...
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-proxy \\
--cluster-cidr=10.200.0.0/16 \\
--masquerade-all=true \\
--kubeconfig=/var/lib/kube-proxy/kube-proxy.kubeconfig \\
--proxy-mode=iptables \\
--v=2
Restart=on-failure
...
EOF
```

```
sudo mv kube-proxy.service /etc/systemd/system/kube-proxy.service
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-proxy
```
```
sudo systemctl start kube-proxy
```
List the pending certificate requests:

```
kubectl get csr
```
```
NAME        AGE       REQUESTOR           CONDITION
csr-XXXXX   1m        kubelet-bootstrap   Pending
```
> Use the `kubectl describe csr` command to view the details of a specific signing request.
Approve each certificate signing request using the `kubectl certificate approve` command:
```
kubectl certificate approve csr-XXXXX
```
```
certificatesigningrequest "csr-XXXXX" approved
```
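With one certificate signing request per worker this is only three commands, but a loop saves typing. A sketch that approves every CSR currently in the list, assuming they should all be approved:

```
kubectl get csr --output=jsonpath='{.items[*].metadata.name}' | \
  xargs -n1 kubectl certificate approve
```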
Once all certificate signing requests have been approved, all nodes should be registered with the cluster:
```
kubectl get nodes
```
```
NAME      STATUS    AGE       VERSION
worker0   Ready     7m        v1.6.0-rc.1
worker1   Ready     5m        v1.6.0-rc.1
worker2   Ready     2m        v1.6.0-rc.1
```