6 Commits
1.7.4 ... 1.8.0

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Kelsey Hightower | ede3437ee8 | update to kubernetes 1.8 | 2017-10-01 20:37:09 -07:00 |
| Steven Trescinski | 7f7fd71874 | Fixed '--service-cluster-ip-range' subnet for Controller Manager | 2017-10-01 12:11:33 -07:00 |
| Kalimar Maia | 51e8709080 | Instructions for having a default configuration. | 2017-10-01 12:11:05 -07:00 |
| Frank Ederveen | 92772d2f69 | 226: use curl for OSX downloads | 2017-10-01 12:10:39 -07:00 |
| Leonardo Faoro | b7550ca7ab | remove trailing space | 2017-09-04 16:08:43 -07:00 |
| Leonardo Faoro | 4441278561 | remove trailing-spaces and blank lines | 2017-09-04 16:08:43 -07:00 |
12 changed files with 185 additions and 149 deletions

.gitignore (vendored, new file, 34 lines added)
View File

@@ -0,0 +1,34 @@
admin-csr.json
admin-key.pem
admin.csr
admin.pem
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
encryption-config.yaml
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.csr
kube-proxy.kubeconfig
kube-proxy.pem
kubernetes-csr.json
kubernetes-key.pem
kubernetes.csr
kubernetes.pem
worker-0-csr.json
worker-0-key.pem
worker-0.csr
worker-0.kubeconfig
worker-0.pem
worker-1-csr.json
worker-1-key.pem
worker-1.csr
worker-1.kubeconfig
worker-1.pem
worker-2-csr.json
worker-2-key.pem
worker-2.csr
worker-2.kubeconfig
worker-2.pem

View File

@@ -14,10 +14,10 @@ The target audience for this tutorial is someone planning to support a productio
Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
-* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.7.4
-* [CRI-O Container Runtime](https://github.com/kubernetes-incubator/cri-o) v1.0.0-beta.0
-* [CNI Container Networking](https://github.com/containernetworking/cni) v0.6.0
-* [etcd](https://github.com/coreos/etcd) 3.2.6
+* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.8.0
+* [cri-containerd Container Runtime](https://github.com/kubernetes-incubator/cri-containerd) 1.0.0-alpha.0
+* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
+* [etcd](https://github.com/coreos/etcd) 3.2.8
## Labs

View File

@@ -14,7 +14,7 @@ This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) t
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
-Verify the Google Cloud SDK version is 169.0.0 or higher:
+Verify the Google Cloud SDK version is 173.0.0 or higher:
```
gcloud version
@@ -24,7 +24,13 @@ gcloud version
This tutorial assumes a default compute region and zone have been configured.
-Set a default compute region:
+If you are using the `gcloud` command-line tool for the first time, `init` is the easiest way to do this:
+```
+gcloud init
+```
+Otherwise set a default compute region:
```
gcloud config set compute/region us-west1

View File

@@ -12,21 +12,16 @@ Download and install `cfssl` and `cfssljson` from the [cfssl repository](https:/
### OS X
```
-wget -q --show-progress --https-only --timestamping \
-  https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64 \
-  https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
+curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
+curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
```
```
-chmod +x cfssl_darwin-amd64 cfssljson_darwin-amd64
+chmod +x cfssl cfssljson
```
```
-sudo mv cfssl_darwin-amd64 /usr/local/bin/cfssl
-```
-```
-sudo mv cfssljson_darwin-amd64 /usr/local/bin/cfssljson
+sudo mv cfssl cfssljson /usr/local/bin/
```
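Not part of the diff, but a quick sanity check once the binaries are in place (assumes `/usr/local/bin` is on your `PATH`):

```
cfssl version
```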
### Linux
@@ -74,7 +69,7 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S
### OS X
```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/darwin/amd64/kubectl
+curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/darwin/amd64/kubectl
```
```
@@ -88,7 +83,7 @@ sudo mv kubectl /usr/local/bin/
### Linux
```
-wget https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl
+wget https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl
```
```
@@ -101,7 +96,7 @@ sudo mv kubectl /usr/local/bin/
### Verification
-Verify `kubectl` version 1.7.4 or higher is installed:
+Verify `kubectl` version 1.8.0 or higher is installed:
```
kubectl version --client
@@ -110,7 +105,7 @@ kubectl version --client
> output
```
-Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
+Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
```
Next: [Provisioning Compute Resources](03-compute-resources.md)

View File

@@ -66,7 +66,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-checks
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
-gcloud compute firewall-rules list --filter "network kubernetes-the-hard-way"
+gcloud compute firewall-rules list --filter "network: kubernetes-the-hard-way"
```
> output
@@ -102,7 +102,7 @@ kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED
## Compute Instances
-The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [CRI-O container runtime](https://github.com/kubernetes-incubator/cri-o). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
+The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [cri-containerd container runtime](https://github.com/kubernetes-incubator/cri-containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
### Kubernetes Controllers
@@ -146,7 +146,7 @@ for i in 0 1 2; do
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,worker
done
```
### Verification

View File

@@ -18,17 +18,17 @@ Download the official etcd release binaries from the [coreos/etcd](https://githu
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.2.6/etcd-v3.2.6-linux-amd64.tar.gz"
"https://github.com/coreos/etcd/releases/download/v3.2.8/etcd-v3.2.8-linux-amd64.tar.gz"
```
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
-tar -xvf etcd-v3.2.6-linux-amd64.tar.gz
+tar -xvf etcd-v3.2.8-linux-amd64.tar.gz
```
```
-sudo mv etcd-v3.2.6-linux-amd64/etcd* /usr/local/bin/
+sudo mv etcd-v3.2.8-linux-amd64/etcd* /usr/local/bin/
```
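Not in the diff, but worth a quick check that the upgraded binary is the one on your `PATH` before configuring the service:

```
etcd --version
```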
### Configure the etcd Server

View File

@@ -18,10 +18,10 @@ Download the official Kubernetes release binaries:
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl"
"https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl"
```
Install the Kubernetes binaries:
@@ -61,7 +61,7 @@ Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
-  --admission-control=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
+  --admission-control=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
@@ -79,12 +79,12 @@ ExecStart=/usr/local/bin/kube-apiserver \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
-  --insecure-bind-address=0.0.0.0 \\
+  --insecure-bind-address=127.0.0.1 \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
-  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
+  --runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
@@ -118,10 +118,10 @@ ExecStart=/usr/local/bin/kube-controller-manager \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--leader-elect=true \\
-  --master=http://${INTERNAL_IP}:8080 \\
+  --master=http://127.0.0.1:8080 \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
-  --service-cluster-ip-range=10.32.0.0/16 \\
+  --service-cluster-ip-range=10.32.0.0/24 \\
--v=2
Restart=on-failure
RestartSec=5
@@ -144,7 +144,7 @@ Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--leader-elect=true \\
-  --master=http://${INTERNAL_IP}:8080 \\
+  --master=http://127.0.0.1:8080 \\
--v=2
Restart=on-failure
RestartSec=5
@@ -182,15 +182,73 @@ kubectl get componentstatuses
```
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
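One way to script that check across all three controllers from your workstation — a sketch, not from the tutorial, using the standard `--command` flag of `gcloud compute ssh`:

```
for instance in controller-0 controller-1 controller-2; do
  gcloud compute ssh ${instance} --command "kubectl get componentstatuses"
done
```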
## RBAC for Kubelet Authorization
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
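To make the mechanism concrete, here is a hedged sketch of the kind of `SubjectAccessReview` the webhook submits; you can create one yourself to see the authorization decision in the returned `status`. This step is not part of the tutorial and the attribute values are illustrative:

```
cat <<EOF | kubectl create -f - -o yaml
apiVersion: authorization.k8s.io/v1beta1
kind: SubjectAccessReview
spec:
  user: kubernetes
  resourceAttributes:
    group: ""
    resource: nodes
    subresource: proxy
    verb: get
EOF
```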
```
gcloud compute ssh controller-0
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
```
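If you want to confirm the role was created (not in the diff):

```
kubectl get clusterrole system:kube-apiserver-to-kubelet
```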
The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
```
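A hedged way to verify the binding end-to-end, assuming your admin credentials permit impersonation:

```
kubectl auth can-i get nodes/proxy --as kubernetes
```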
## The Kubernetes Frontend Load Balancer
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
@@ -200,15 +258,7 @@ In this section you will provision an external load balancer to front the Kubern
Create the external load balancer network resources:
```
-gcloud compute http-health-checks create kube-apiserver-health-check \
-  --description "Kubernetes API Server Health Check" \
-  --port 8080 \
-  --request-path /healthz
-```
-```
-gcloud compute target-pools create kubernetes-target-pool \
-  --http-health-check=kube-apiserver-health-check
+gcloud compute target-pools create kubernetes-target-pool
```
```
@@ -235,7 +285,7 @@ gcloud compute forwarding-rules create kubernetes-forwarding-rule \
Retrieve the `kubernetes-the-hard-way` static IP address:
```
-KUBERNETES_PUBLIC_IP_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
+KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
@@ -243,7 +293,7 @@ KUBERNETES_PUBLIC_IP_ADDRESS=$(gcloud compute addresses describe kubernetes-the-
Make an HTTP request for the Kubernetes version info:
```
-curl --cacert ca.pem https://${KUBERNETES_PUBLIC_IP_ADDRESS}:6443/version
+curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
> output
@@ -251,11 +301,11 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_IP_ADDRESS}:6443/version
```
{
  "major": "1",
-  "minor": "7",
-  "gitVersion": "v1.7.4",
-  "gitCommit": "793658f2d7ca7f064d2bdf606519f9fe1229c381",
+  "minor": "8",
+  "gitVersion": "v1.8.0",
+  "gitCommit": "6e937839ac04a38cac63e6a7a306c5d035fe7b0a",
  "gitTreeState": "clean",
-  "buildDate": "2017-08-17T08:30:51Z",
+  "buildDate": "2017-09-28T22:46:41Z",
  "goVersion": "go1.8.3",
  "compiler": "gc",
  "platform": "linux/amd64"

View File

@@ -1,6 +1,6 @@
# Bootstrapping the Kubernetes Worker Nodes
-In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-o](https://github.com/kubernetes-incubator/cri-o), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
+In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-containerd](https://github.com/kubernetes-incubator/cri-containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
## Prerequisites
@@ -12,45 +12,31 @@ gcloud compute ssh worker-0
## Provisioning a Kubernetes Worker Node
### Install the cri-o OS Dependencies
-Add the `alexlarsson/flatpak` [PPA](https://launchpad.net/ubuntu/+ppas) which hosts the `libostree` package:
+Install the OS dependencies:
```
-sudo add-apt-repository -y ppa:alexlarsson/flatpak
+sudo apt-get -y install socat
```
-```
-sudo apt-get update
-```
-Install the OS dependencies required by the cri-o container runtime:
-```
-sudo apt-get install -y socat libgpgme11 libostree-1-1
-```
> The socat binary enables support for the `kubectl port-forward` command.
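As a concrete example of what `socat` enables once the cluster is running — the pod name here is hypothetical:

```
kubectl port-forward nginx 8080:80
```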
### Download and Install Worker Binaries
```
wget -q --show-progress --https-only --timestamping \
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
-  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc4/runc.amd64 \
-  https://storage.googleapis.com/kubernetes-the-hard-way/crio-amd64-v1.0.0-beta.0.tar.gz \
-  https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl \
-  https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-proxy \
-  https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
+  https://github.com/kubernetes-incubator/cri-containerd/releases/download/v1.0.0-alpha.0/cri-containerd-1.0.0-alpha.0.tar.gz \
+  https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl \
+  https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kube-proxy \
+  https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubelet
```
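A cautious extra step the tutorial skips: checksum the downloads and compare against each project's release page (no checksums are reproduced here):

```
sha256sum cni-plugins-amd64-v0.6.0.tgz cri-containerd-1.0.0-alpha.0.tar.gz
```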
Create the installation directories:
```
sudo mkdir -p \
-  /etc/containers \
  /etc/cni/net.d \
-  /etc/crio \
  /opt/cni/bin \
-  /usr/local/libexec/crio \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
@@ -64,26 +50,17 @@ sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
```
```
-tar -xvf crio-amd64-v1.0.0-beta.0.tar.gz
+sudo tar -xvf cri-containerd-1.0.0-alpha.0.tar.gz -C /
```
```
-chmod +x kubectl kube-proxy kubelet runc.amd64
+chmod +x kubectl kube-proxy kubelet
```
```
-sudo mv runc.amd64 /usr/local/bin/runc
+sudo mv kubectl kube-proxy kubelet /usr/local/bin/
```
-```
-sudo mv crio crioctl kpod kubectl kube-proxy kubelet /usr/local/bin/
-```
-```
-sudo mv conmon pause /usr/local/libexec/crio/
-```
### Configure CNI Networking
Retrieve the Pod CIDR range for the current compute instance:
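The command itself falls outside this hunk; as a sketch, the value can be read from GCE instance metadata, assuming the pod CIDR was attached to each instance as a custom metadata attribute named `pod-cidr` when the workers were created:

```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```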
@@ -132,33 +109,6 @@ Move the network configuration files to the CNI configuration directory:
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```
-### Configure the CRI-O Container Runtime
-```
-sudo mv crio.conf seccomp.json /etc/crio/
-```
-```
-sudo mv policy.json /etc/containers/
-```
-```
-cat > crio.service <<EOF
-[Unit]
-Description=CRI-O daemon
-Documentation=https://github.com/kubernetes-incubator/cri-o
-[Service]
-ExecStart=/usr/local/bin/crio
-Restart=always
-RestartSec=10s
-[Install]
-WantedBy=multi-user.target
-EOF
-```
### Configure the Kubelet
```
@@ -180,25 +130,26 @@ cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
-After=crio.service
-Requires=crio.service
+After=cri-containerd.service
+Requires=cri-containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--allow-privileged=true \\
--anonymous-auth=false \\
--authorization-mode=Webhook \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--cluster-dns=10.32.0.10 \\
--cluster-domain=cluster.local \\
--container-runtime=remote \\
-  --container-runtime-endpoint=unix:///var/run/crio.sock \\
-  --enable-custom-metrics \\
+  --container-runtime-endpoint=unix:///var/run/cri-containerd.sock \\
+  --image-pull-progress-deadline=2m \\
-  --image-service-endpoint=unix:///var/run/crio.sock \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--pod-cidr=${POD_CIDR} \\
--register-node=true \\
-  --require-kubeconfig \\
-  --runtime-request-timeout=10m \\
+  --runtime-request-timeout=15m \\
--tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
--tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
--v=2
@@ -241,7 +192,7 @@ EOF
### Start the Worker Services
```
-sudo mv crio.service kubelet.service kube-proxy.service /etc/systemd/system/
+sudo mv kubelet.service kube-proxy.service /etc/systemd/system/
```
```
@@ -249,11 +200,11 @@ sudo systemctl daemon-reload
```
```
-sudo systemctl enable crio kubelet kube-proxy
+sudo systemctl enable containerd cri-containerd kubelet kube-proxy
```
```
-sudo systemctl start crio kubelet kube-proxy
+sudo systemctl start containerd cri-containerd kubelet kube-proxy
```
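Not part of the diff: a quick health check on each node before moving on:

```
sudo systemctl status kubelet kube-proxy --no-pager
```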
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
@@ -275,10 +226,10 @@ kubectl get nodes
> output
```
-NAME       STATUS    AGE       VERSION
-worker-0   Ready     5m        v1.7.4
-worker-1   Ready     3m        v1.7.4
-worker-2   Ready     7s        v1.7.4
+NAME       STATUS    ROLES     AGE       VERSION
+worker-0   Ready     <none>    1m        v1.8.0
+worker-1   Ready     <none>    1m        v1.8.0
+worker-2   Ready     <none>    1m        v1.8.0
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

View File

@@ -53,11 +53,11 @@ kubectl get componentstatuses
```
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
```
List the nodes in the remote Kubernetes cluster:
@@ -69,10 +69,10 @@ kubectl get nodes
> output
```
-NAME       STATUS    AGE       VERSION
-worker-0   Ready     7m        v1.7.4
-worker-1   Ready     4m        v1.7.4
-worker-2   Ready     1m        v1.7.4
+NAME       STATUS    ROLES     AGE       VERSION
+worker-0   Ready     <none>    2m        v1.8.0
+worker-1   Ready     <none>    2m        v1.8.0
+worker-2   Ready     <none>    2m        v1.8.0
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

View File

@@ -43,7 +43,7 @@ done
List the routes in the `kubernetes-the-hard-way` VPC network:
```
-gcloud compute routes list --filter "network kubernetes-the-hard-way"
+gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```
> output

View File

@@ -100,13 +100,13 @@ curl --head http://127.0.0.1:8080
```
HTTP/1.1 200 OK
-Server: nginx/1.13.3
-Date: Thu, 31 Aug 2017 01:58:15 GMT
+Server: nginx/1.13.5
+Date: Mon, 02 Oct 2017 01:04:20 GMT
Content-Type: text/html
Content-Length: 612
-Last-Modified: Tue, 11 Jul 2017 13:06:07 GMT
+Last-Modified: Tue, 08 Aug 2017 15:25:00 GMT
Connection: keep-alive
ETag: "5964cd3f-264"
ETag: "5989d7cc-264"
Accept-Ranges: bytes
```
@@ -132,7 +132,7 @@ kubectl logs $POD_NAME
> output
```
-127.0.0.1 - - [31/Aug/2017:01:58:15 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
+127.0.0.1 - - [02/Oct/2017:01:04:20 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
```
### Exec
@@ -148,7 +148,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
> output
```
-nginx version: nginx/1.13.3
+nginx version: nginx/1.13.5
```
## Services
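The service setup itself is outside this hunk; a hedged sketch of how `EXTERNAL_IP` aside, the `NODE_PORT` used by the `curl` below might be populated, assuming an `nginx` deployment from the earlier steps:

```
kubectl expose deployment nginx --port 80 --type NodePort
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
```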
@@ -195,13 +195,13 @@ curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```
HTTP/1.1 200 OK
-Server: nginx/1.13.3
-Date: Thu, 31 Aug 2017 02:00:21 GMT
+Server: nginx/1.13.5
+Date: Mon, 02 Oct 2017 01:06:11 GMT
Content-Type: text/html
Content-Length: 612
-Last-Modified: Tue, 11 Jul 2017 13:06:07 GMT
+Last-Modified: Tue, 08 Aug 2017 15:25:00 GMT
Connection: keep-alive
ETag: "5964cd3f-264"
ETag: "5989d7cc-264"
Accept-Ranges: bytes
```

View File

@@ -4,7 +4,7 @@ In this lab you will delete the compute resources created during this tutorial.
## Compute Instances
Delete the controller and worker compute instances:
```
gcloud -q compute instances delete \