update to Kubernetes 1.6

Kelsey Hightower
2017-03-25 11:41:26 -07:00
parent e9c25522a4
commit f62e9c9777
2 changed files with 109 additions and 73 deletions


@@ -12,11 +12,11 @@ In this lab you will also create a frontend load balancer with a public IP addre
The Kubernetes control plane is made up of the following components:
-* Kubernetes API Server
-* Kubernetes Scheduler
-* Kubernetes Controller Manager
+* API Server
+* Scheduler
+* Controller Manager
-Each component is being run on the same machines for the following reasons:
+Each component is being run on the same machine for the following reasons:
* The Scheduler and Controller Manager are tightly coupled with the API Server
* Only one Scheduler and one Controller Manager can be active at a given time, but it's safe to run multiple instances simultaneously; each component elects a leader via the API Server.
@@ -52,16 +52,19 @@ sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/
Download the official Kubernetes release binaries:
```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-apiserver
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-apiserver
```
```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-controller-manager
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-controller-manager
```
```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kube-scheduler
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kube-scheduler
```
```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-beta.4/bin/linux/amd64/kubectl
+wget https://storage.googleapis.com/kubernetes-release/release/v1.6.0-rc.1/bin/linux/amd64/kubectl
```
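The four downloads above follow a single URL pattern, so they can be collapsed into a small helper. This is a sketch, not part of the tutorial: the function name and the `-q` flag are our own.

```shell
# Sketch: fetch all four control plane binaries for one release.
# download_k8s_binaries is a hypothetical helper, not from the tutorial.
download_k8s_binaries() {
  local version="$1"
  local base="https://storage.googleapis.com/kubernetes-release/release/${version}/bin/linux/amd64"
  local bin
  for bin in kube-apiserver kube-controller-manager kube-scheduler kubectl; do
    wget -q "${base}/${bin}"
  done
}
```

Invoked as `download_k8s_binaries v1.6.0-rc.1`, it fetches the same four files as the commands above.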
Install the Kubernetes binaries:
@@ -77,7 +80,7 @@ sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
### Kubernetes API Server
-### Create the systemd unit file
+#### Create the systemd unit file
Capture the internal IP address:
@@ -130,7 +133,7 @@ ExecStart=/usr/bin/kube-apiserver \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
-  --service-account-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
+  --service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
@@ -153,7 +156,13 @@ sudo mv kube-apiserver.service /etc/systemd/system/
```
sudo systemctl daemon-reload
+```
+```
sudo systemctl enable kube-apiserver
+```
+```
sudo systemctl start kube-apiserver
```
@@ -199,7 +208,13 @@ sudo mv kube-controller-manager.service /etc/systemd/system/
```
sudo systemctl daemon-reload
+```
+```
sudo systemctl enable kube-controller-manager
+```
+```
sudo systemctl start kube-controller-manager
```
@@ -236,7 +251,13 @@ sudo mv kube-scheduler.service /etc/systemd/system/
```
sudo systemctl daemon-reload
+```
+```
sudo systemctl enable kube-scheduler
+```
+```
sudo systemctl start kube-scheduler
```
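Each component repeats the same daemon-reload/enable/start sequence, so the three sections above can be condensed into one loop. A sketch, assuming the unit files are already installed under `/etc/systemd/system` (the function name is ours):

```shell
# Sketch: enable and start all three control plane services in one pass.
# start_control_plane is a hypothetical helper, not from the tutorial.
start_control_plane() {
  sudo systemctl daemon-reload
  local svc
  for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    sudo systemctl enable "${svc}"
    sudo systemctl start "${svc}"
  done
}
```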
@@ -244,12 +265,12 @@ sudo systemctl start kube-scheduler
sudo systemctl status kube-scheduler --no-pager
```
### Verification
```
kubectl get componentstatuses
```
```
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
@@ -268,32 +289,33 @@ The virtual machines created in this tutorial will not have permission to comple
### GCE
```
-gcloud compute http-health-checks create kube-apiserver-check \
+gcloud compute http-health-checks create kube-apiserver-health-check \
--description "Kubernetes API Server Health Check" \
--port 8080 \
--request-path /healthz
```
```
-gcloud compute target-pools create kubernetes-pool \
-  --http-health-check=kube-apiserver-check
+gcloud compute target-pools create kubernetes-target-pool \
+  --http-health-check=kube-apiserver-health-check
```
```
-gcloud compute target-pools add-instances kubernetes-pool \
+gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller0,controller1,controller2
```
```
-KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes \
+KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region us-central1 \
--format 'value(address)')
```
```
-gcloud compute forwarding-rules create kubernetes-rule \
+gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
-  --target-pool kubernetes-pool \
+  --target-pool kubernetes-target-pool \
--region us-central1
```
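Once the forwarding rule exists, the API server can be probed end to end through the public address. A sketch, assuming `ca.pem` from the certificate lab is in the current directory and that `/healthz` is served on the secure port (the function name is ours):

```shell
# Sketch: probe the API server through the frontend load balancer.
# check_apiserver is a hypothetical helper, not from the tutorial.
check_apiserver() {
  curl -s --cacert ca.pem \
    "https://${KUBERNETES_PUBLIC_ADDRESS}:6443/healthz"
}
```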
@@ -304,17 +326,3 @@ aws elb register-instances-with-load-balancer \
--load-balancer-name kubernetes \
--instances ${CONTROLLER_0_INSTANCE_ID} ${CONTROLLER_1_INSTANCE_ID} ${CONTROLLER_2_INSTANCE_ID}
```
-## RBAC
-The following command will grant the `kubelet-bootstrap` user the permissions necessary to request a client TLS certificate.
-Bind the `kubelet-bootstrap` user to the `system:node-bootstrapper` cluster role:
-```
-kubectl create clusterrolebinding kubelet-bootstrap \
-  --clusterrole=system:node-bootstrapper \
-  --user=kubelet-bootstrap
-```
-At this point kubelets can now request a TLS client certificate as defined in the [kubelet TLS bootstrapping guide](https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/).