Update to Kubernetes v1.14.4 with necessary fixes

pull/473/head
Bright Zheng 2019-08-03 12:03:53 +08:00
parent bf2850974e
commit 50163592a9
15 changed files with 329 additions and 271 deletions


@@ -14,12 +14,15 @@ The target audience for this tutorial is someone planning to support a productio
 Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
-* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.12.0
+* [Kubernetes](https://github.com/kubernetes/kubernetes) v1.14.4
-* [containerd Container Runtime](https://github.com/containerd/containerd) 1.2.0-rc.0
+* [containerd Container Runtime](https://github.com/containerd/containerd) 1.2.7
-* [gVisor](https://github.com/google/gvisor) 50c283b9f56bb7200938d9e207355f05f79f0d17
+* [gVisor](https://github.com/google/gvisor) release-20190529.1
-* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
+* [CNI Container Networking](https://github.com/containernetworking/cni) 0.7.1
-* [etcd](https://github.com/coreos/etcd) v3.3.9
+* [etcd](https://github.com/coreos/etcd) v3.3.13
-* [CoreDNS](https://github.com/coredns/coredns) v1.2.2
+* [CoreDNS](https://github.com/coredns/coredns) v1.5.2
+* [cri-tools](https://github.com/kubernetes-sigs/cri-tools) v1.15.0
+* [runc](https://github.com/opencontainers/runc) v1.0.0-rc8
+* [CNI plugins](https://github.com/containernetworking/plugins) v0.8.1
 ## Labs


@@ -16,29 +16,38 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in
 Verify the Google Cloud SDK version is 218.0.0 or higher:
-```
+```sh
 gcloud version
 ```
+> output
+```
+Google Cloud SDK 241.0.0
+bq 2.0.43
+core 2019.04.02
+gsutil 4.38
+```
 ### Set a Default Compute Region and Zone
 This tutorial assumes a default compute region and zone have been configured.
 If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this:
-```
+```sh
 gcloud init
 ```
 Otherwise set a default compute region:
-```
+```sh
 gcloud config set compute/region us-west1
 ```
 Set a default compute zone:
-```
+```sh
 gcloud config set compute/zone us-west1-c
 ```


@@ -11,42 +11,42 @@ Download and install `cfssl` and `cfssljson` from the [cfssl repository](https:/
 ### OS X
-```
+```sh
 curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
 curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
 ```
-```
+```sh
 chmod +x cfssl cfssljson
 ```
-```
+```sh
 sudo mv cfssl cfssljson /usr/local/bin/
 ```
 Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:
-```
+```sh
 brew install cfssl
 ```
 ### Linux
-```
+```sh
 wget -q --show-progress --https-only --timestamping \
 https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
 https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
 ```
-```
+```sh
 chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
 ```
-```
+```sh
 sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
 ```
-```
+```sh
 sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
 ```
@@ -54,64 +54,65 @@ sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
 Verify `cfssl` version 1.2.0 or higher is installed:
-```
+```sh
 cfssl version
 ```
 > output
 ```
-Version: 1.2.0
+Version: 1.3.4
 Revision: dev
-Runtime: go1.6
+Runtime: go1.12.7
 ```
 > The cfssljson command line utility does not provide a way to print its version.
 ## Install kubectl
-The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
+The `kubectl` command line utility is used to interact with the Kubernetes API Server.
+Download and install the latest stable `kubectl` from the official release binaries:
 ### OS X
-```
+```sh
-curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl
+curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
 ```
-```
+```sh
 chmod +x kubectl
 ```
-```
+```sh
 sudo mv kubectl /usr/local/bin/
 ```
 ### Linux
-```
+```sh
-wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
+curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
 ```
-```
+```sh
 chmod +x kubectl
 ```
-```
+```sh
 sudo mv kubectl /usr/local/bin/
 ```
 ### Verification
-Verify `kubectl` version 1.12.0 or higher is installed:
+Verify `kubectl` version 1.14.0 or higher is installed:
-```
+```sh
 kubectl version --client
 ```
 > output
 ```
-Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
+Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
 ```
 Next: [Provisioning Compute Resources](03-compute-resources.md)


@@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com
 Create the `kubernetes-the-hard-way` custom VPC network:
-```
+```sh
 gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
 ```
@@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets)
 Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
-```
+```sh
 gcloud compute networks subnets create kubernetes \
 --network kubernetes-the-hard-way \
 --range 10.240.0.0/24
@@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \
 Create a firewall rule that allows internal communication across all protocols:
-```
+```sh
 gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
 --allow tcp,udp,icmp \
 --network kubernetes-the-hard-way \
@@ -45,7 +45,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
 Create a firewall rule that allows external SSH, ICMP, and HTTPS:
-```
+```sh
 gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
 --allow tcp:22,tcp:6443,icmp \
 --network kubernetes-the-hard-way \
@@ -56,7 +56,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
 List the firewall rules in the `kubernetes-the-hard-way` VPC network:
-```
+```sh
 gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
 ```
@@ -72,14 +72,14 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000
 Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
-```
+```sh
 gcloud compute addresses create kubernetes-the-hard-way \
 --region $(gcloud config get-value compute/region)
 ```
 Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
-```
+```sh
 gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
 ```
@@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http
 Create three compute instances which will host the Kubernetes control plane:
-```
+```sh
 for i in 0 1 2; do
 gcloud compute instances create controller-${i} \
 --async \
@@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
 Create three compute instances which will host the Kubernetes worker nodes:
-```
+```sh
 for i in 0 1 2; do
 gcloud compute instances create worker-${i} \
 --async \
@@ -143,7 +143,7 @@ done
 List the compute instances in your default compute zone:
-```
+```sh
 gcloud compute instances list
 ```
@@ -165,7 +165,7 @@ SSH will be used to configure the controller and worker instances. When connecti
 Test SSH access to the `controller-0` compute instances:
-```
+```sh
 gcloud compute ssh controller-0
 ```
@@ -217,12 +217,12 @@ Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX
 Type `exit` at the prompt to exit the `controller-0` compute instance:
-```
+```sh
 $USER@controller-0:~$ exit
 ```
 > output
-```
+```sh
 logout
 Connection to XX.XXX.XXX.XXX closed
 ```


@@ -8,7 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g
 Generate the CA configuration file, certificate, and private key:
-```
+```sh
 {
 cat > ca-config.json <<EOF
@@ -54,7 +54,11 @@ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
 Results:
 ```
+$ ls -p | grep -v /
+ca-config.json
+ca-csr.json
 ca-key.pem
+ca.csr
 ca.pem
 ```
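If `openssl` is available locally, the freshly generated CA certificate can be inspected to confirm its subject and validity window (an optional check, not required by the lab):

```sh
# Print the CA certificate in human-readable form
openssl x509 -in ca.pem -text -noout
```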
@@ -66,7 +70,7 @@ In this section you will generate client and server certificates for each Kubern
 Generate the `admin` client certificate and private key:
-```
+```sh
 {
 cat > admin-csr.json <<EOF
@@ -101,19 +105,21 @@ cfssl gencert \
 Results:
 ```
+admin-csr.json
 admin-key.pem
+admin.csr
 admin.pem
 ```
 ### The Kubelet Client Certificates
-Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/) called Node Authorizer, that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
+Kubernetes uses a **special-purpose authorization mode** called [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/), that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
 Generate a certificate and private key for each Kubernetes worker node:
-```
+```sh
 for instance in worker-0 worker-1 worker-2; do
 cat > ${instance}-csr.json <<EOF
 {
 "CN": "system:node:${instance}",
 "key": {
@@ -132,30 +138,36 @@ cat > ${instance}-csr.json <<EOF
 }
 EOF
 EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
 --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
 INTERNAL_IP=$(gcloud compute instances describe ${instance} \
 --format 'value(networkInterfaces[0].networkIP)')
 cfssl gencert \
 -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
 -profile=kubernetes \
 ${instance}-csr.json | cfssljson -bare ${instance}
 done
 ```
 Results:
 ```
+worker-0-csr.json
 worker-0-key.pem
+worker-0.csr
 worker-0.pem
+worker-1-csr.json
 worker-1-key.pem
+worker-1.csr
 worker-1.pem
+worker-2-csr.json
 worker-2-key.pem
+worker-2.csr
 worker-2.pem
 ```
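Each worker certificate should verify against the CA created above; a quick optional confirmation with `openssl`:

```sh
# Every file should report "OK"
openssl verify -CAfile ca.pem worker-0.pem worker-1.pem worker-2.pem
```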
@@ -163,10 +175,10 @@ worker-2.pem
 Generate the `kube-controller-manager` client certificate and private key:
-```
+```sh
 {
 cat > kube-controller-manager-csr.json <<EOF
 {
 "CN": "system:kube-controller-manager",
 "key": {
@@ -185,12 +197,12 @@ cat > kube-controller-manager-csr.json <<EOF
 }
 EOF
 cfssl gencert \
 -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
 }
 ```
@@ -198,7 +210,9 @@ cfssl gencert \
 Results:
 ```
+kube-controller-manager-csr.json
 kube-controller-manager-key.pem
+kube-controller-manager.csr
 kube-controller-manager.pem
 ```
@@ -207,10 +221,9 @@ kube-controller-manager.pem
 Generate the `kube-proxy` client certificate and private key:
-```
+```sh
 {
 cat > kube-proxy-csr.json <<EOF
 {
 "CN": "system:kube-proxy",
 "key": {
@@ -229,12 +242,12 @@ cat > kube-proxy-csr.json <<EOF
 }
 EOF
 cfssl gencert \
 -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 kube-proxy-csr.json | cfssljson -bare kube-proxy
 }
 ```
@@ -242,7 +255,9 @@ cfssl gencert \
 Results:
 ```
+kube-proxy-csr.json
 kube-proxy-key.pem
+kube-proxy.csr
 kube-proxy.pem
 ```
@@ -250,10 +265,10 @@ kube-proxy.pem
 Generate the `kube-scheduler` client certificate and private key:
-```
+```sh
 {
 cat > kube-scheduler-csr.json <<EOF
 {
 "CN": "system:kube-scheduler",
 "key": {
@@ -272,12 +287,12 @@ cat > kube-scheduler-csr.json <<EOF
 }
 EOF
 cfssl gencert \
 -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 kube-scheduler-csr.json | cfssljson -bare kube-scheduler
 }
 ```
@@ -285,7 +300,9 @@ cfssl gencert \
 Results:
 ```
+kube-scheduler-csr.json
 kube-scheduler-key.pem
+kube-scheduler.csr
 kube-scheduler.pem
 ```
@@ -296,14 +313,14 @@ The `kubernetes-the-hard-way` static IP address will be included in the list of
 Generate the Kubernetes API Server certificate and private key:
-```
+```sh
 {
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
 --region $(gcloud config get-value compute/region) \
 --format 'value(address)')
 cat > kubernetes-csr.json <<EOF
 {
 "CN": "kubernetes",
 "key": {
@@ -322,13 +339,13 @@ cat > kubernetes-csr.json <<EOF
 }
 EOF
 cfssl gencert \
 -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
 -profile=kubernetes \
 kubernetes-csr.json | cfssljson -bare kubernetes
 }
 ```
@@ -336,7 +353,9 @@ cfssl gencert \
 Results:
 ```
+kubernetes-csr.json
 kubernetes-key.pem
+kubernetes.csr
 kubernetes.pem
 ```
@@ -346,10 +365,10 @@ The Kubernetes Controller Manager leverages a key pair to generate and sign serv
 Generate the `service-account` certificate and private key:
-```
+```sh
 {
 cat > service-account-csr.json <<EOF
 {
 "CN": "service-accounts",
 "key": {
@@ -368,12 +387,12 @@ cat > service-account-csr.json <<EOF
 }
 EOF
 cfssl gencert \
 -ca=ca.pem \
 -ca-key=ca-key.pem \
 -config=ca-config.json \
 -profile=kubernetes \
 service-account-csr.json | cfssljson -bare service-account
 }
 ```
@@ -381,24 +400,30 @@ cfssl gencert \
 Results:
 ```
+service-account-csr.json
 service-account-key.pem
+service-account.csr
 service-account.pem
 ```
 ## Distribute the Client and Server Certificates
 Copy the appropriate certificates and private keys to each worker instance:
+- ca.pem
+- worker-X.pem & worker-X-key.pem
-```
+```sh
 for instance in worker-0 worker-1 worker-2; do
 gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
 done
 ```
 Copy the appropriate certificates and private keys to each controller instance:
+- ca.pem & ca-key.pem
+- kubernetes-key.pem & kubernetes.pem
+- service-account-key.pem & service-account.pem
-```
+```sh
 for instance in controller-0 controller-1 controller-2; do
 gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
 service-account-key.pem service-account.pem ${instance}:~/
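A quick spot check that the files landed, reusing the `--command` form of `gcloud compute ssh` seen elsewhere in this tutorial (the exact listing is only illustrative):

```sh
gcloud compute ssh controller-0 \
  --command "ls -l ca.pem ca-key.pem kubernetes.pem kubernetes-key.pem service-account.pem service-account-key.pem"
```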


@@ -1,6 +1,6 @@
 # Generating Kubernetes Configuration Files for Authentication
-In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
+In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as `kubeconfig`s, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
 ## Client Authentication Configs
@@ -12,7 +12,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
 Retrieve the `kubernetes-the-hard-way` static IP address:
-```
+```sh
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
 --region $(gcloud config get-value compute/region) \
 --format 'value(address)')
@@ -24,7 +24,7 @@ When generating kubeconfig files for Kubelets the client certificate matching th
 Generate a kubeconfig file for each worker node:
-```
+```sh
 for instance in worker-0 worker-1 worker-2; do
 kubectl config set-cluster kubernetes-the-hard-way \
 --certificate-authority=ca.pem \
@@ -59,8 +59,9 @@ worker-2.kubeconfig
 Generate a kubeconfig file for the `kube-proxy` service:
-```
+```sh
 {
 kubectl config set-cluster kubernetes-the-hard-way \
 --certificate-authority=ca.pem \
 --embed-certs=true \
@@ -79,6 +80,7 @@ Generate a kubeconfig file for the `kube-proxy` service:
 --kubeconfig=kube-proxy.kubeconfig
 kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
 }
 ```
@@ -92,8 +94,9 @@ kube-proxy.kubeconfig
 Generate a kubeconfig file for the `kube-controller-manager` service:
-```
+```sh
 {
 kubectl config set-cluster kubernetes-the-hard-way \
 --certificate-authority=ca.pem \
 --embed-certs=true \
@@ -112,6 +115,7 @@ Generate a kubeconfig file for the `kube-controller-manager` service:
 --kubeconfig=kube-controller-manager.kubeconfig
 kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
 }
 ```
@@ -126,8 +130,9 @@ kube-controller-manager.kubeconfig
 Generate a kubeconfig file for the `kube-scheduler` service:
-```
+```sh
 {
 kubectl config set-cluster kubernetes-the-hard-way \
 --certificate-authority=ca.pem \
 --embed-certs=true \
@@ -146,6 +151,7 @@ Generate a kubeconfig file for the `kube-scheduler` service:
 --kubeconfig=kube-scheduler.kubeconfig
 kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
 }
 ```
@@ -159,8 +165,9 @@ kube-scheduler.kubeconfig
 Generate a kubeconfig file for the `admin` user:
-```
+```sh
 {
 kubectl config set-cluster kubernetes-the-hard-way \
 --certificate-authority=ca.pem \
 --embed-certs=true \
@@ -179,6 +186,7 @@ Generate a kubeconfig file for the `admin` user:
 --kubeconfig=admin.kubeconfig
 kubectl config use-context default --kubeconfig=admin.kubeconfig
 }
 ```
@@ -188,14 +196,11 @@ Results:
 admin.kubeconfig
 ```
-##
 ## Distribute the Kubernetes Configuration Files
 Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
-```
+```sh
 for instance in worker-0 worker-1 worker-2; do
 gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
 done
@@ -203,7 +208,7 @@ done
 Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
-```
+```sh
 for instance in controller-0 controller-1 controller-2; do
 gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
 done
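Any of the generated kubeconfigs can be inspected locally before or after copying; `kubectl config view` omits the embedded certificate data:

```sh
kubectl config view --kubeconfig=worker-0.kubeconfig
```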


@@ -16,7 +16,7 @@ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
 Create the `encryption-config.yaml` encryption config file:
-```
+```sh
 cat > encryption-config.yaml <<EOF
 kind: EncryptionConfig
 apiVersion: v1
@@ -34,7 +34,7 @@ EOF
 Copy the `encryption-config.yaml` encryption config file to each controller instance:
-```
+```sh
 for instance in controller-0 controller-1 controller-2; do
 gcloud compute scp encryption-config.yaml ${instance}:~/
 done
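The `aescbc` provider is handed the 32-byte key generated above; a quick local sanity check that the value decodes to 32 bytes (assuming GNU coreutils `base64`; on macOS use `-D`):

```sh
echo -n "$ENCRYPTION_KEY" | base64 --decode | wc -c   # expect 32
```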


@@ -6,7 +6,7 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
 The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:
-```
+```sh
 gcloud compute ssh controller-0
 ```
@@ -20,45 +20,41 @@ gcloud compute ssh controller-0
 Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:
-```
+```sh
 wget -q --show-progress --https-only --timestamping \
-"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
+"https://github.com/etcd-io/etcd/releases/download/v3.3.13/etcd-v3.3.13-linux-amd64.tar.gz"
 ```
 Extract and install the `etcd` server and the `etcdctl` command line utility:
-```
+```sh
-{
-tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
-sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
-}
+tar -xvf etcd-v3.3.13-linux-amd64.tar.gz
+sudo mv etcd-v3.3.13-linux-amd64/etcd* /usr/local/bin/
 ```
 ### Configure the etcd Server
-```
+```sh
-{
-sudo mkdir -p /etc/etcd /var/lib/etcd
-sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
-}
+sudo mkdir -p /etc/etcd /var/lib/etcd
+sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
 ```
 The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
-```
+```sh
 INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
 http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
 ```
 Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
-```
+```sh
 ETCD_NAME=$(hostname -s)
 ```
 Create the `etcd.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/etcd.service
 [Unit]
 Description=etcd
@@ -93,7 +89,7 @@ EOF
 ### Start the etcd Server
-```
+```sh
 {
 sudo systemctl daemon-reload
 sudo systemctl enable etcd
@@ -107,7 +103,7 @@ EOF
 List the etcd cluster members:
-```
+```sh
 sudo ETCDCTL_API=3 etcdctl member list \
 --endpoints=https://127.0.0.1:2379 \
 --cacert=/etc/etcd/ca.pem \
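The same TLS flags also work for a basic health probe of the local member, using the certificates copied into `/etc/etcd/` earlier (an optional extra check):

```sh
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```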


@@ -6,7 +6,7 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
 The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:
-```
+```sh
 gcloud compute ssh controller-0
 ```
@@ -18,7 +18,7 @@ gcloud compute ssh controller-0
 Create the Kubernetes configuration directory:
-```
+```sh
 sudo mkdir -p /etc/kubernetes/config
 ```
@@ -26,17 +26,17 @@ sudo mkdir -p /etc/kubernetes/config
 Download the official Kubernetes release binaries:
-```
+```sh
 wget -q --show-progress --https-only --timestamping \
-"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
+"https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-apiserver" \
-"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
+"https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-controller-manager" \
-"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
+"https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-scheduler" \
-"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
+"https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl"
 ```
 Install the Kubernetes binaries:
-```
+```sh
 {
 chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
 sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
@@ -45,7 +45,7 @@ Install the Kubernetes binaries:
 ### Configure the Kubernetes API Server
-```
+```sh
 {
 sudo mkdir -p /var/lib/kubernetes/
@@ -57,14 +57,14 @@ Install the Kubernetes binaries:
 The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
-```
+```sh
 INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
 http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
 ```
 Create the `kube-apiserver.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
 [Unit]
 Description=Kubernetes API Server
@@ -82,7 +82,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
 --authorization-mode=Node,RBAC \\
 --bind-address=0.0.0.0 \\
 --client-ca-file=/var/lib/kubernetes/ca.pem \\
---enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
+--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,PersistentVolumeClaimResize,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \\
 --enable-swagger-ui=true \\
 --etcd-cafile=/var/lib/kubernetes/ca.pem \\
 --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
@@ -113,13 +113,13 @@ EOF
 Move the `kube-controller-manager` kubeconfig into place:
-```
+```sh
 sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
 ```
 Create the `kube-controller-manager.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
 [Unit]
 Description=Kubernetes Controller Manager
@@ -127,12 +127,17 @@ Documentation=https://github.com/kubernetes/kubernetes
 [Service]
 ExecStart=/usr/local/bin/kube-controller-manager \\
+--bind-address=127.0.0.1 \\
+--allocate-node-cidrs=true \\
+--node-cidr-mask-size=24 \\
+--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
 --address=0.0.0.0 \\
+--master=127.0.0.1:8080 \\
 --cluster-cidr=10.200.0.0/16 \\
 --cluster-name=kubernetes \\
+--client-ca-file=/var/lib/kubernetes/ca.pem \\
 --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
 --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
---kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
 --leader-elect=true \\
 --root-ca-file=/var/lib/kubernetes/ca.pem \\
 --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
@@ -151,15 +156,15 @@ EOF
 Move the `kube-scheduler` kubeconfig into place:
-```
+```sh
 sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
 ```
 Create the `kube-scheduler.yaml` configuration file:
-```
+```sh
 cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
-apiVersion: componentconfig/v1alpha1
+apiVersion: kubescheduler.config.k8s.io/v1alpha1
 kind: KubeSchedulerConfiguration
 clientConnection:
 kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
@@ -170,7 +175,7 @@ EOF
 Create the `kube-scheduler.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
 [Unit]
 Description=Kubernetes Scheduler
@@ -190,7 +195,7 @@ EOF
 ### Start the Controller Services
-```
+```sh
 {
 sudo systemctl daemon-reload
 sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
@@ -208,11 +213,11 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala
 Install a basic web server to handle HTTP health checks:
-```
+```sh
 sudo apt-get install -y nginx
 ```
-```
+```sh
 cat > kubernetes.default.svc.cluster.local <<EOF
 server {
 listen 80;
@@ -226,7 +231,7 @@ server {
 EOF
 ```
-```
+```sh
 {
 sudo mv kubernetes.default.svc.cluster.local \
 /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
@@ -235,17 +240,14 @@ EOF
 }
 ```
-```
+```sh
 sudo systemctl restart nginx
-```
-```
 sudo systemctl enable nginx
 ```
 ### Verification
-```
+```sh
 kubectl get componentstatuses --kubeconfig admin.kubeconfig
 ```
@@ -260,7 +262,7 @@ etcd-1 Healthy {"health": "true"}
 Test the nginx HTTP health check proxy:
-```
+```sh
 curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
 ```
@@ -283,13 +285,13 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
 > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
-```
+```sh
 gcloud compute ssh controller-0
 ```
 Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
-```
+```sh
 cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRole
@@ -317,7 +319,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
 Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
-```
+```sh
 cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
@@ -346,7 +348,7 @@ In this section you will provision an external load balancer to front the Kubern
 Create the external load balancer network resources:
-```
+```sh
 {
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
 --region $(gcloud config get-value compute/region) \
@@ -380,7 +382,7 @@ Create the external load balancer network resources:
 Retrieve the `kubernetes-the-hard-way` static IP address:
-```
+```sh
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
 --region $(gcloud config get-value compute/region) \
 --format 'value(address)')
@@ -388,8 +390,8 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
 Make a HTTP request for the Kubernetes version info:
-```
+```sh
-curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
+curl --cacert ca.pem "https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version"
 ```
@@ -397,12 +399,12 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
 ```
 {
 "major": "1",
-"minor": "12",
+"minor": "14",
-"gitVersion": "v1.12.0",
+"gitVersion": "v1.14.4",
-"gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
+"gitCommit": "a87e9a978f65a8303aa9467537aa59c18122cbf9",
 "gitTreeState": "clean",
-"buildDate": "2018-09-27T16:55:41Z",
+"buildDate": "2019-07-08T08:43:10Z",
-"goVersion": "go1.10.4",
+"goVersion": "go1.12.5",
 "compiler": "gc",
 "platform": "linux/amd64"
 }
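Back on each controller instance, the three control plane services can also be checked directly through systemd, independent of the API calls above:

```sh
sudo systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
```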


@@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
 The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example:
-```
+```sh
 gcloud compute ssh worker-0
 ```
@@ -18,7 +18,7 @@ gcloud compute ssh worker-0
 Install the OS dependencies:
-```
+```sh
 {
 sudo apt-get update
 sudo apt-get -y install socat conntrack ipset
@@ -29,21 +29,21 @@ Install the OS dependencies:
 ### Download and Install Worker Binaries
-```
+```sh
 wget -q --show-progress --https-only --timestamping \
-https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
+https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
 https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
-https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
+https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
-https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
+https://github.com/containernetworking/plugins/releases/download/v0.8.1/cni-plugins-linux-amd64-v0.8.1.tgz \
-https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
+https://github.com/containerd/containerd/releases/download/v1.2.7/containerd-1.2.7.linux-amd64.tar.gz \
-https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
+https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubectl \
-https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
+https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kube-proxy \
-https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
+https://storage.googleapis.com/kubernetes-release/release/v1.14.4/bin/linux/amd64/kubelet
 ```
 Create the installation directories:
-```
+```sh
 sudo mkdir -p \
 /etc/cni/net.d \
 /opt/cni/bin \
@@ -55,15 +55,15 @@ sudo mkdir -p \
 Install the worker binaries:
-```
+```sh
 {
 sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
 sudo mv runc.amd64 runc
 chmod +x kubectl kube-proxy kubelet runc runsc
 sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
-sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
+sudo tar -xvf crictl-v1.15.0-linux-amd64.tar.gz -C /usr/local/bin/
-sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
+sudo tar -xvf cni-plugins-linux-amd64-v0.8.1.tgz -C /opt/cni/bin/
-sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
+sudo tar -xvf containerd-1.2.7.linux-amd64.tar.gz -C /
 }
 ```
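Each installed binary can report its own version, which is a quick way to confirm the expected releases ended up on the `PATH`:

```sh
runc --version
crictl --version
containerd --version
```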
@@ -71,14 +71,15 @@ Install the worker binaries:
 Retrieve the Pod CIDR range for the current compute instance:
-```
+```sh
 POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
 http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
+echo $POD_CIDR
 ```
 Create the `bridge` network configuration file:
-```
+```sh
 cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
 {
 "cniVersion": "0.3.1",
@@ -100,7 +101,7 @@ EOF
 Create the `loopback` network configuration file:
-```
+```sh
 cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
 {
 "cniVersion": "0.3.1",
@@ -113,11 +114,11 @@ EOF
 Create the `containerd` configuration file:
-```
+```sh
 sudo mkdir -p /etc/containerd/
 ```
-```
+```sh
 cat << EOF | sudo tee /etc/containerd/config.toml
 [plugins]
 [plugins.cri.containerd]
@@ -141,7 +142,7 @@ EOF
 Create the `containerd.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/containerd.service
 [Unit]
 Description=containerd container runtime
@@ -167,7 +168,7 @@ EOF
 ### Configure the Kubelet
-```
+```sh
 {
 sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
 sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
@@ -177,7 +178,7 @@ EOF
 Create the `kubelet-config.yaml` configuration file:
-```
+```sh
 cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
 kind: KubeletConfiguration
 apiVersion: kubelet.config.k8s.io/v1beta1
@@ -205,7 +206,7 @@ EOF
 Create the `kubelet.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
 [Unit]
 Description=Kubernetes Kubelet
@@ -233,13 +234,13 @@ EOF
 ### Configure the Kubernetes Proxy
-```
+```sh
 sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
 ```
 Create the `kube-proxy-config.yaml` configuration file:
-```
+```sh
 cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
 kind: KubeProxyConfiguration
 apiVersion: kubeproxy.config.k8s.io/v1alpha1
@@ -252,7 +253,7 @@ EOF
 Create the `kube-proxy.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
 [Unit]
 Description=Kubernetes Kube Proxy
@@ -271,7 +272,7 @@ EOF
 ### Start the Worker Services
-```
+```sh
 {
 sudo systemctl daemon-reload
 sudo systemctl enable containerd kubelet kube-proxy
@@ -287,7 +288,7 @@ EOF
 List the registered Kubernetes nodes:
-```
+```sh
 gcloud compute ssh controller-0 \
 --command "kubectl get nodes --kubeconfig admin.kubeconfig"
 ```
@@ -296,9 +297,9 @@ gcloud compute ssh controller-0 \
 ```
 NAME STATUS ROLES AGE VERSION
-worker-0 Ready <none> 35s v1.12.0
+worker-0 Ready <none> 94s v1.14.4
-worker-1 Ready <none> 36s v1.12.0
+worker-1 Ready <none> 93s v1.14.4
-worker-2 Ready <none> 36s v1.12.0
+worker-2 Ready <none> 92s v1.14.4
 ```
 Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)


@@ -10,7 +10,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
 Generate a kubeconfig file suitable for authenticating as the `admin` user:
-```
+```sh
 {
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
 --region $(gcloud config get-value compute/region) \
@@ -37,7 +37,7 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
 Check the health of the remote Kubernetes cluster:
-```
+```sh
 kubectl get componentstatuses
 ```
@@ -54,17 +54,17 @@ etcd-0 Healthy {"health":"true"}
 List the nodes in the remote Kubernetes cluster:
-```
+```sh
 kubectl get nodes
 ```
 > output
 ```
 NAME STATUS ROLES AGE VERSION
-worker-0 Ready <none> 117s v1.12.0
+worker-0 Ready <none> 3m59s v1.14.4
-worker-1 Ready <none> 118s v1.12.0
+worker-1 Ready <none> 3m58s v1.14.4
-worker-2 Ready <none> 118s v1.12.0
+worker-2 Ready <none> 3m57s v1.14.4
 ```
 Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)


@ -12,7 +12,7 @@ In this section you will gather the information required to create routes in the
Print the internal IP address and Pod CIDR range for each worker instance: Print the internal IP address and Pod CIDR range for each worker instance:
``` ```sh
for instance in worker-0 worker-1 worker-2; do for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \ gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
@ -31,7 +31,7 @@ done
Create network routes for each worker instance: Create network routes for each worker instance:
``` ```sh
for i in 0 1 2; do for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
--network kubernetes-the-hard-way \ --network kubernetes-the-hard-way \
@ -42,7 +42,7 @@ done
List the routes in the `kubernetes-the-hard-way` VPC network: List the routes in the `kubernetes-the-hard-way` VPC network:
``` ```sh
gcloud compute routes list --filter "network: kubernetes-the-hard-way" gcloud compute routes list --filter "network: kubernetes-the-hard-way"
``` ```
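Each route should map a worker's Pod CIDR range to that worker's internal IP address. To inspect a single route in more detail — an illustrative example using the `kubernetes-route-10-200-<i>-0-24` naming pattern from the previous step:

```sh
gcloud compute routes describe kubernetes-route-10-200-0-24 \
  --format 'value(destRange,nextHopIp)'
```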
View File
@ -6,7 +6,7 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts
Deploy the `coredns` cluster add-on: Deploy the `coredns` cluster add-on:
``` ```sh
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
``` ```
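The DNS lookups later in this lab depend on the add-on being fully rolled out. One way to wait for it — a short sketch, assuming the `coredns` Deployment name used by the manifest above:

```sh
kubectl -n kube-system rollout status deployment/coredns
```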
@ -23,7 +23,7 @@ service/kube-dns created
List the pods created by the `kube-dns` deployment: List the pods created by the `kube-dns` deployment:
``` ```sh
kubectl get pods -l k8s-app=kube-dns -n kube-system kubectl get pods -l k8s-app=kube-dns -n kube-system
``` ```
@ -39,13 +39,13 @@ coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
Create a `busybox` deployment: Create a `busybox` deployment:
``` ```sh
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600 kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
``` ```
List the pod created by the `busybox` deployment: List the pod created by the `busybox` deployment:
``` ```sh
kubectl get pods -l run=busybox kubectl get pods -l run=busybox
``` ```
@ -64,7 +64,7 @@ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name
Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod: Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
``` ```sh
kubectl exec -ti $POD_NAME -- nslookup kubernetes kubectl exec -ti $POD_NAME -- nslookup kubernetes
``` ```
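The short name resolves through the Pod's DNS search domains; the fully qualified service name can be checked as well — assuming the default `cluster.local` cluster domain used throughout this tutorial:

```sh
kubectl exec -ti $POD_NAME -- nslookup kubernetes.default.svc.cluster.local
```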
View File
@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt
Create a generic secret: Create a generic secret:
``` ```sh
kubectl create secret generic kubernetes-the-hard-way \ kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata" --from-literal="mykey=mydata"
``` ```
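Before inspecting etcd directly, it can be worth confirming the secret is readable through the API — a quick check that is not part of the original lab:

```sh
# Should print "mydata"; use `base64 -D` on older versions of OS X
kubectl get secret kubernetes-the-hard-way \
  -o jsonpath='{.data.mykey}' | base64 --decode
```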
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
``` ```sh
gcloud compute ssh controller-0 \ gcloud compute ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \ --command "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \ --endpoints=https://127.0.0.1:2379 \
@ -54,13 +54,13 @@ In this section you will verify the ability to create and manage [Deployments](h
Create a deployment for the [nginx](https://nginx.org/en/) web server: Create a deployment for the [nginx](https://nginx.org/en/) web server:
``` ```sh
kubectl run nginx --image=nginx kubectl run nginx --image=nginx
``` ```
List the pod created by the `nginx` deployment: List the pod created by the `nginx` deployment:
``` ```sh
kubectl get pods -l run=nginx kubectl get pods -l run=nginx
``` ```
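If the pod is still in `ContainerCreating`, waiting for the Deployment to finish rolling out avoids racing the steps below — an optional check, not in the original lab:

```sh
kubectl rollout status deployment/nginx
```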
@ -77,13 +77,13 @@ In this section you will verify the ability to access applications remotely usin
Retrieve the full name of the `nginx` pod: Retrieve the full name of the `nginx` pod:
``` ```sh
POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}") POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
``` ```
Forward port `8080` on your local machine to port `80` of the `nginx` pod: Forward port `8080` on your local machine to port `80` of the `nginx` pod:
``` ```sh
kubectl port-forward $POD_NAME 8080:80 kubectl port-forward $POD_NAME 8080:80
``` ```
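`kubectl port-forward` blocks the terminal, so the HTTP request in the next step is normally issued from a second terminal. A non-interactive variant — purely illustrative, not part of the original lab:

```sh
kubectl port-forward $POD_NAME 8080:80 >/dev/null 2>&1 &   # run the tunnel in the background
sleep 2                                                    # give it a moment to come up
curl --head http://127.0.0.1:8080
kill $!                                                    # stop the background port-forward
```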
@ -104,13 +104,13 @@ curl --head http://127.0.0.1:8080
``` ```
HTTP/1.1 200 OK HTTP/1.1 200 OK
Server: nginx/1.15.4 Server: nginx/1.17.2
Date: Sun, 30 Sep 2018 19:23:10 GMT Date: Sat, 03 Aug 2019 03:35:08 GMT
Content-Type: text/html Content-Type: text/html
Content-Length: 612 Content-Length: 612
Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive Connection: keep-alive
ETag: "5baa4e63-264" ETag: "5d36f361-264"
Accept-Ranges: bytes Accept-Ranges: bytes
``` ```
@ -129,14 +129,14 @@ In this section you will verify the ability to [retrieve container logs](https:/
Print the `nginx` pod logs: Print the `nginx` pod logs:
``` ```sh
kubectl logs $POD_NAME kubectl logs $POD_NAME
``` ```
> output > output
``` ```
127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-" 127.0.0.1 - - [03/Aug/2019:03:35:08 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
``` ```
### Exec ### Exec
@ -152,7 +152,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
> output > output
``` ```
nginx version: nginx/1.15.4 nginx version: nginx/1.17.2
``` ```
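Any binary present in the container image can be run the same way — for example, listing the content nginx is serving (an illustrative extra, assuming the stock `nginx` image layout):

```sh
kubectl exec -ti $POD_NAME -- ls -l /usr/share/nginx/html
```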
## Services ## Services
@ -161,7 +161,7 @@ In this section you will verify the ability to expose applications using a [Serv
Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
``` ```sh
kubectl expose deployment nginx --port 80 --type NodePort kubectl expose deployment nginx --port 80 --type NodePort
``` ```
@ -169,14 +169,21 @@ kubectl expose deployment nginx --port 80 --type NodePort
Retrieve the node port assigned to the `nginx` service: Retrieve the node port assigned to the `nginx` service:
``` ```sh
NODE_PORT=$(kubectl get svc nginx \ NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}') --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
echo $NODE_PORT
```
> output
```
30313
``` ```
Create a firewall rule that allows remote access to the `nginx` node port: Create a firewall rule that allows remote access to the `nginx` node port:
``` ```sh
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \ --allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way --network kubernetes-the-hard-way
@ -184,28 +191,28 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service
Retrieve the external IP address of a worker instance: Retrieve the external IP address of a worker instance:
``` ```sh
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)') --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
``` ```
Make an HTTP request using the external IP address and the `nginx` node port: Make an HTTP request using the external IP address and the `nginx` node port:
``` ```sh
curl -I http://${EXTERNAL_IP}:${NODE_PORT} curl -I "http://${EXTERNAL_IP}:${NODE_PORT}"
``` ```
> output > output
``` ```
HTTP/1.1 200 OK HTTP/1.1 200 OK
Server: nginx/1.15.4 Server: nginx/1.17.2
Date: Sun, 30 Sep 2018 19:25:40 GMT Date: Sat, 03 Aug 2019 03:43:19 GMT
Content-Type: text/html Content-Type: text/html
Content-Length: 612 Content-Length: 612
Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive Connection: keep-alive
ETag: "5baa4e63-264" ETag: "5d36f361-264"
Accept-Ranges: bytes Accept-Ranges: bytes
``` ```
@ -215,7 +222,7 @@ This section will verify the ability to run untrusted workloads using [gVisor](h
Create the `untrusted` pod: Create the `untrusted` pod:
``` ```sh
cat <<EOF | kubectl apply -f - cat <<EOF | kubectl apply -f -
apiVersion: v1 apiVersion: v1
kind: Pod kind: Pod
@ -236,34 +243,43 @@ In this section you will verify the `untrusted` pod is running under gVisor (run
Verify the `untrusted` pod is running: Verify the `untrusted` pod is running:
``` ```sh
kubectl get pods -o wide kubectl get pods,svc -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE
busybox-68654f944b-djjjb 1/1 Running 0 5m 10.200.0.2 worker-0
nginx-65899c769f-xkfcn 1/1 Running 0 4m 10.200.1.2 worker-1
untrusted 1/1 Running 0 10s 10.200.0.3 worker-0
``` ```
> output
```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/busybox-68f7d47fc6-fnzlp 1/1 Running 0 18m 10.200.1.2 worker-1 <none> <none>
pod/nginx 1/1 Running 0 11m 10.200.1.3 worker-1 <none> <none>
pod/untrusted 1/1 Running 0 90s 10.200.0.3 worker-0 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.32.0.1 <none> 443/TCP 7h20m <none>
service/nginx NodePort 10.32.0.147 <none> 80:31209/TCP 7m30s run=nginx
```
Get the node name where the `untrusted` pod is running: Get the node name where the `untrusted` pod is running:
``` ```sh
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}') INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
``` ```
SSH into the worker node: SSH into the worker node:
``` ```sh
gcloud compute ssh ${INSTANCE_NAME} gcloud compute ssh ${INSTANCE_NAME}
``` ```
List the containers running under gVisor: List the containers running under gVisor:
``` ```sh
sudo runsc --root /run/containerd/runsc/k8s.io list sudo runsc --root /run/containerd/runsc/k8s.io list
``` ```
> output
``` ```
I0930 19:27:13.255142 20832 x:0] *************************** I0930 19:27:13.255142 20832 x:0] ***************************
I0930 19:27:13.255326 20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list] I0930 19:27:13.255326 20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
@ -285,21 +301,21 @@ I0930 19:27:13.259733 20832 x:0] Exiting with status: 0
Get the ID of the `untrusted` pod: Get the ID of the `untrusted` pod:
``` ```sh
POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \ POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
pods --name untrusted -q) pods --name untrusted -q)
``` ```
Get the ID of the `webserver` container running in the `untrusted` pod: Get the ID of the `webserver` container running in the `untrusted` pod:
``` ```sh
CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \ CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
ps -p ${POD_ID} -q) ps -p ${POD_ID} -q)
``` ```
Use the gVisor `runsc` command to display the processes running inside the `webserver` container: Use the gVisor `runsc` command to display the processes running inside the `webserver` container:
``` ```sh
sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID} sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
``` ```
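The gVisor sentry that hosts the sandbox is also visible from the node's own process table — a quick way to confirm `runsc` is active on the worker, purely illustrative:

```sh
sudo pgrep -af runsc
```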
View File
@ -6,7 +6,7 @@ In this lab you will delete the compute resources created during this tutorial.
Delete the controller and worker compute instances: Delete the controller and worker compute instances:
``` ```sh
gcloud -q compute instances delete \ gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \ controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2 worker-0 worker-1 worker-2
@ -16,7 +16,7 @@ gcloud -q compute instances delete \
Delete the external load balancer network resources: Delete the external load balancer network resources:
``` ```sh
{ {
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region) --region $(gcloud config get-value compute/region)
@ -31,7 +31,7 @@ Delete the external load balancer network resources:
Delete the `kubernetes-the-hard-way` firewall rules: Delete the `kubernetes-the-hard-way` firewall rules:
``` ```sh
gcloud -q compute firewall-rules delete \ gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \ kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \ kubernetes-the-hard-way-allow-internal \