9 Commits

Author SHA1 Message Date
Kelsey Hightower
bf2850974e Update to Kubernetes 1.12.0 and add CoreDNS support 2018-09-30 19:35:05 +00:00
Kelsey Hightower
b974042d95 Update to Kubernetes 1.10.2 and add gVisor support 2018-05-14 14:06:01 +00:00
Daniel J. Pritchett
4f5cecb5ed Add brief contribution guide 2018-01-30 07:38:21 -08:00
Martin Mosegaard Amdisen
68ebb95898 Fix minor typo in lab 14 2018-01-30 05:45:35 -08:00
Mark Vincze
8db0280ef6 Fix small typo 2018-01-30 05:45:16 -08:00
Andrew Thompson
a5c83f1cee Argument syntax for Filter deprecated.
Operator evaluation is changing for consistency across Google APIs.
2018-01-30 05:44:52 -08:00
githubixx
e2dbdaa66b core workload api incl. Deployment now stable 2018-01-30 05:44:05 -08:00
Andrew Lytvynov
05ca2d04cf Correct kubectl output for kube-dns pod list
https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml will create 1 kube-dns pod without autoscaler.
This may confuse the reader.
2018-01-30 05:42:28 -08:00
Phil Estes
f46642a6ba Update cri-containerd location and release
cri-containerd project is moving as a sub-project of containerd itself;
the GitHub migration is complete. Also, as of 9 Jan, the beta.1 release
of cri-containerd is available so the download link is updated.

Signed-off-by: Phil Estes <estesp@gmail.com>
2018-01-30 05:41:55 -08:00
18 changed files with 952 additions and 326 deletions

15
.gitignore vendored
View File

@@ -2,12 +2,23 @@ admin-csr.json
admin-key.pem
admin.csr
admin.pem
admin.kubeconfig
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
encryption-config.yaml
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.csr
kube-controller-manager.kubeconfig
kube-controller-manager.pem
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.csr
kube-scheduler.kubeconfig
kube-scheduler.pem
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.csr
@@ -32,3 +43,7 @@ worker-2-key.pem
worker-2.csr
worker-2.kubeconfig
worker-2.pem
service-account-key.pem
service-account.csr
service-account.pem
service-account-csr.json

18
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,18 @@
This project is made possible by contributors like YOU! While all contributions are welcome, please be sure to follow these suggestions to help your PR get merged.
## License
This project uses an [Apache license](LICENSE). Be sure you're comfortable with the implications of that before working up a patch.
## Review and merge process
Review and merge duties are managed by [@kelseyhightower](https://github.com/kelseyhightower). Expect some burden of proof for demonstrating the marginal value of adding new content to the tutorial.
Here are some examples of the review and justification process:
- [#208](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/208)
- [#282](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/282)
## Notes on minutiae
If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time.

View File

@@ -14,10 +14,12 @@ The target audience for this tutorial is someone planning to support a productio
Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.9.0
* [cri-containerd Container Runtime](https://github.com/kubernetes-incubator/cri-containerd) 1.0.0-beta.0
* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.12.0
* [containerd Container Runtime](https://github.com/containerd/containerd) 1.2.0-rc.0
* [gVisor](https://github.com/google/gvisor) 50c283b9f56bb7200938d9e207355f05f79f0d17
* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
* [etcd](https://github.com/coreos/etcd) 3.2.11
* [etcd](https://github.com/coreos/etcd) v3.3.9
* [CoreDNS](https://github.com/coredns/coredns) v1.2.2
## Labs

View File

@@ -51,7 +51,7 @@ metadata:
labels:
addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-dns

View File

@@ -14,7 +14,7 @@ This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) t
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
Verify the Google Cloud SDK version is 183.0.0 or higher:
Verify the Google Cloud SDK version is 218.0.0 or higher:
```
gcloud version
@@ -44,4 +44,14 @@ gcloud config set compute/zone us-west1-c
> Use the `gcloud compute zones list` command to view additional regions and zones.
## Running Commands in Parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances. In those cases, consider using tmux and splitting a window into multiple panes with `synchronize-panes` enabled to speed up the provisioning process.
> The use of tmux is optional and not required to complete this tutorial.
![tmux screenshot](images/tmux-screenshot.png)
> Enable `synchronize-panes`: press `ctrl+b` followed by `shift+:`, then type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
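A scripted equivalent of the key bindings above can be handy when provisioning all three controllers or workers at once. The session name, pane layout, and per-pane SSH targets below are illustrative assumptions, not part of the tutorial:

```
# Create a detached session and split the first window into three panes.
tmux new-session -d -s kthw
tmux split-window -h -t kthw
tmux split-window -v -t kthw

# Attach, then SSH to a different instance in each pane, e.g.
# `gcloud compute ssh controller-0`, `controller-1`, `controller-2`.
tmux attach -t kthw

# From another terminal (or via ctrl+b : inside the session),
# mirror keystrokes to every pane; use "off" to disable.
tmux set-window-option -t kthw synchronize-panes on
```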
Next: [Installing the Client Tools](02-client-tools.md)

View File

@@ -24,6 +24,12 @@ chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
```
Some OS X users may experience problems using the pre-built binaries, in which case [Homebrew](https://brew.sh) might be a better option:
```
brew install cfssl
```
### Linux
```
@@ -69,7 +75,7 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S
### OS X
```
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/darwin/amd64/kubectl
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/darwin/amd64/kubectl
```
```
@@ -83,7 +89,7 @@ sudo mv kubectl /usr/local/bin/
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl
```
```
@@ -96,7 +102,7 @@ sudo mv kubectl /usr/local/bin/
### Verification
Verify `kubectl` version 1.9.0 or higher is installed:
Verify `kubectl` version 1.12.0 or higher is installed:
```
kubectl version --client
@@ -105,7 +111,7 @@ kubectl version --client
> output
```
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
```
Next: [Provisioning Compute Resources](03-compute-resources.md)

View File

@@ -57,7 +57,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute firewall-rules list --filter "network: kubernetes-the-hard-way"
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```
> output
@@ -92,7 +92,7 @@ kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [cri-containerd container runtime](https://github.com/kubernetes-incubator/cri-containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
### Kubernetes Controllers
@@ -104,7 +104,7 @@ for i in 0 1 2; do
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-family ubuntu-1804-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.1${i} \
@@ -128,7 +128,7 @@ for i in 0 1 2; do
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-family ubuntu-1804-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--metadata pod-cidr=10.200.${i}.0/24 \
@@ -159,4 +159,72 @@ worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX
worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING
```
## Configuring SSH Access
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time, SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
Test SSH access to the `controller-0` compute instance:
```
gcloud compute ssh controller-0
```
If this is your first time connecting to a compute instance, SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
```
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
```
At this point the generated SSH keys will be uploaded and stored in your project:
```
Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:nz1i8jHmgQuGt+WscqP5SeIaSy5wyIJeL71MuV+QruE $USER@$HOSTNAME
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . |
|o. oS |
|=... .o .o o |
|+.+ =+=.+.X o |
|.+ ==O*B.B = . |
| .+.=EB++ o |
+----[SHA256]-----+
Updating project ssh metadata...-Updated [https://www.googleapis.com/compute/v1/projects/$PROJECT_ID].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
```
After the SSH keys have been updated, you'll be logged into the `controller-0` instance:
```
Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-1006-gcp x86_64)
...
Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX
```
Type `exit` at the prompt to exit the `controller-0` compute instance:
```
$USER@controller-0:~$ exit
```
> output
```
logout
Connection to XX.XXX.XXX.XXX closed
```
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

View File

@@ -1,14 +1,16 @@
# Provisioning a CA and Generating TLS Certificates
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kubelet, and kube-proxy.
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
## Certificate Authority
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
Create the CA configuration file:
Generate the CA configuration file, certificate, and private key:
```
{
cat > ca-config.json <<EOF
{
"signing": {
@@ -24,11 +26,7 @@ cat > ca-config.json <<EOF
}
}
EOF
```
Create the CA certificate signing request:
```
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
@@ -47,12 +45,10 @@ cat > ca-csr.json <<EOF
]
}
EOF
```
Generate the CA certificate and private key:
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
```
Results:
@@ -68,9 +64,11 @@ In this section you will generate client and server certificates for each Kubern
### The Admin Client Certificate
Create the `admin` client certificate signing request:
Generate the `admin` client certificate and private key:
```
{
cat > admin-csr.json <<EOF
{
"CN": "admin",
@@ -89,17 +87,15 @@ cat > admin-csr.json <<EOF
]
}
EOF
```
Generate the `admin` client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
```
Results:
@@ -163,11 +159,57 @@ worker-2-key.pem
worker-2.pem
```
### The kube-proxy Client Certificate
### The Controller Manager Client Certificate
Create the `kube-proxy` client certificate signing request:
Generate the `kube-controller-manager` client certificate and private key:
```
{
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
```
Results:
```
kube-controller-manager-key.pem
kube-controller-manager.pem
```
### The Kube Proxy Client Certificate
Generate the `kube-proxy` client certificate and private key:
```
{
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
@@ -186,17 +228,15 @@ cat > kube-proxy-csr.json <<EOF
]
}
EOF
```
Generate the `kube-proxy` client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
}
```
Results:
@@ -206,21 +246,63 @@ kube-proxy-key.pem
kube-proxy.pem
```
### The Scheduler Client Certificate
Generate the `kube-scheduler` client certificate and private key:
```
{
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
```
Results:
```
kube-scheduler-key.pem
kube-scheduler.pem
```
### The Kubernetes API Server Certificate
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Retrieve the `kubernetes-the-hard-way` static IP address:
Generate the Kubernetes API Server certificate and private key:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Create the Kubernetes API Server certificate signing request:
```
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
@@ -239,11 +321,7 @@ cat > kubernetes-csr.json <<EOF
]
}
EOF
```
Generate the Kubernetes API Server certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
@@ -251,6 +329,8 @@ cfssl gencert \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
```
Results:
@@ -260,6 +340,52 @@ kubernetes-key.pem
kubernetes.pem
```
## The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
Generate the `service-account` certificate and private key:
```
{
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
```
Results:
```
service-account-key.pem
service-account.pem
```
## Distribute the Client and Server Certificates
Copy the appropriate certificates and private keys to each worker instance:
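The worker copy loop is unchanged in this commit and therefore elided from the diff. A sketch of that step, assuming the per-node kubelet certificates generated earlier in this lab:

```
for instance in worker-0 worker-1 worker-2; do
  # Each worker receives the CA certificate plus its own kubelet key pair.
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
```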
@@ -274,10 +400,11 @@ Copy the appropriate certificates and private keys to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${instance}:~/
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/
done
```
> The `kube-proxy` and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)

View File

@@ -4,9 +4,7 @@ In this lab you will generate [Kubernetes configuration files](https://kubernete
## Client Authentication Configs
In this section you will generate kubeconfig files for the `kubelet` and `kube-proxy` clients.
> The `scheduler` and `controller manager` access the Kubernetes API Server locally over an insecure API port which does not require authentication. The Kubernetes API Server's insecure port is only enabled for local access.
In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.
### Kubernetes Public IP Address
@@ -62,32 +60,137 @@ worker-2.kubeconfig
Generate a kubeconfig file for the `kube-proxy` service:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
```
```
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
```
Results:
```
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kube-proxy.kubeconfig
```
### The kube-controller-manager Kubernetes Configuration File
Generate a kubeconfig file for the `kube-controller-manager` service:
```
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```
Results:
```
kube-controller-manager.kubeconfig
```
### The kube-scheduler Kubernetes Configuration File
Generate a kubeconfig file for the `kube-scheduler` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
```
Results:
```
kube-scheduler.kubeconfig
```
### The admin Kubernetes Configuration File
Generate a kubeconfig file for the `admin` user:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
}
```
Results:
```
admin.kubeconfig
```
## Distribute the Kubernetes Configuration Files
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
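The worker loop itself is unchanged and not shown in the diff; a sketch, assuming each worker receives its own kubelet kubeconfig plus the shared `kube-proxy` kubeconfig:

```
for instance in worker-0 worker-1 worker-2; do
  # Per-node kubelet kubeconfig plus the shared kube-proxy kubeconfig.
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
```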
@@ -98,4 +201,12 @@ for instance in worker-0 worker-1 worker-2; do
done
```
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)

View File

@@ -10,6 +10,10 @@ The commands in this lab must be run on each controller instance: `controller-0`
gcloud compute ssh controller-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Bootstrapping an etcd Cluster Member
### Download and Install the etcd Binaries
@@ -18,27 +22,25 @@ Download the official etcd release binaries from the [coreos/etcd](https://githu
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz"
"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
```
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
tar -xvf etcd-v3.2.11-linux-amd64.tar.gz
```
```
sudo mv etcd-v3.2.11-linux-amd64/etcd* /usr/local/bin/
{
tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
}
```
### Configure the etcd Server
```
sudo mkdir -p /etc/etcd /var/lib/etcd
```
```
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
```
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
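The retrieval command is unchanged and not shown in this diff. One way to do it, using the GCE instance metadata service (the same pattern that appears elsewhere in this lab):

```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```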
@@ -57,7 +59,7 @@ ETCD_NAME=$(hostname -s)
Create the `etcd.service` systemd unit file:
```
cat > etcd.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
@@ -75,7 +77,7 @@ ExecStart=/usr/local/bin/etcd \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
@@ -92,19 +94,11 @@ EOF
### Start the etcd Server
```
sudo mv etcd.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable etcd
```
```
sudo systemctl start etcd
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
@@ -114,7 +108,11 @@ sudo systemctl start etcd
List the etcd cluster members:
```
ETCDCTL_API=3 etcdctl member list
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
```
> output

View File

@@ -10,41 +10,52 @@ The commands in this lab must be run on each controller instance: `controller-0`
gcloud compute ssh controller-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Provision the Kubernetes Control Plane
Create the Kubernetes configuration directory:
```
sudo mkdir -p /etc/kubernetes/config
```
### Download and Install the Kubernetes Controller Binaries
Download the official Kubernetes release binaries:
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl"
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl"
```
Install the Kubernetes binaries:
```
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
```
```
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
```
### Configure the Kubernetes API Server
```
sudo mkdir -p /var/lib/kubernetes/
{
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
```
```
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/
```
The instance internal IP address will be used advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
@@ -54,14 +65,13 @@ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
Create the `kube-apiserver.service` systemd unit file:
```
cat > kube-apiserver.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--admission-control=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
@@ -72,6 +82,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
@@ -79,16 +90,14 @@ ExecStart=/usr/local/bin/kube-apiserver \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--insecure-bind-address=127.0.0.1 \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-ca-file=/var/lib/kubernetes/ca.pem \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
@@ -102,10 +111,16 @@ EOF
### Configure the Kubernetes Controller Manager
Move the `kube-controller-manager` kubeconfig into place:
```
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```
Create the `kube-controller-manager.service` systemd unit file:
```
cat > kube-controller-manager.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
@@ -117,11 +132,12 @@ ExecStart=/usr/local/bin/kube-controller-manager \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
@@ -133,18 +149,36 @@ EOF
### Configure the Kubernetes Scheduler
Move the `kube-scheduler` kubeconfig into place:
```
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
Create the `kube-scheduler.yaml` configuration file:
```
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
leaderElect: true
EOF
```
Create the `kube-scheduler.service` systemd unit file:
```
cat > kube-scheduler.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
@@ -157,27 +191,62 @@ EOF
### Start the Controller Services
```
sudo mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
```
```
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
### Enable HTTP Health Checks
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround, the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
> The `/healthz` API server endpoint does not require authentication by default.
Install a basic web server to handle HTTP health checks:
```
sudo apt-get install -y nginx
```
```
cat > kubernetes.default.svc.cluster.local <<EOF
server {
listen 80;
server_name kubernetes.default.svc.cluster.local;
location /healthz {
proxy_pass https://127.0.0.1:6443/healthz;
proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
}
}
EOF
```
```
{
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}
```
```
sudo systemctl restart nginx
```
```
sudo systemctl enable nginx
```
### Verification
```
kubectl get componentstatuses
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```
```
@@ -189,6 +258,23 @@ etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```
Test the nginx HTTP health check proxy:
```
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```
```
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sun, 30 Sep 2018 17:44:24 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
ok
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## RBAC for Kubelet Authorization
@@ -204,7 +290,7 @@ gcloud compute ssh controller-0
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
cat <<EOF | kubectl apply -f -
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
@@ -232,7 +318,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
```
cat <<EOF | kubectl apply -f -
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
@@ -255,29 +341,39 @@ In this section you will provision an external load balancer to front the Kubern
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
### Provision a Network Load Balancer
Create the external load balancer network resources:
```
gcloud compute target-pools create kubernetes-target-pool
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
```
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(name)')
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
```
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
}
```
### Verification
@@ -301,12 +397,12 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
{
"major": "1",
"minor": "9",
"gitVersion": "v1.9.0",
"gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
"minor": "12",
"gitVersion": "v1.12.0",
"gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0",
"gitTreeState": "clean",
"buildDate": "2017-12-15T20:55:30Z",
"goVersion": "go1.9.2",
"buildDate": "2018-09-27T16:55:41Z",
"goVersion": "go1.10.4",
"compiler": "gc",
"platform": "linux/amd64"
}

View File

@@ -1,6 +1,6 @@
# Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-containerd](https://github.com/kubernetes-incubator/cri-containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [gVisor](https://github.com/google/gvisor), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
## Prerequisites
@@ -10,12 +10,19 @@ The commands in this lab must be run on each worker instance: `worker-0`, `worke
gcloud compute ssh worker-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Provisioning a Kubernetes Worker Node
Install the OS dependencies:
```
sudo apt-get -y install socat
{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}
```
> The socat binary enables support for the `kubectl port-forward` command.
@@ -24,11 +31,14 @@ sudo apt-get -y install socat
```
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
https://github.com/kubernetes-incubator/cri-containerd/releases/download/v1.0.0-beta.0/cri-containerd-1.0.0-beta.0.linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubelet
https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet
```
Create the installation directories:
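The directory creation step is unchanged and therefore elided from this diff. A sketch, with the directory list inferred from the paths referenced later in this lab:

```
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes
```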
@@ -46,19 +56,15 @@ sudo mkdir -p \
Install the worker binaries:
```
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
```
```
sudo tar -xvf cri-containerd-1.0.0-beta.0.linux-amd64.tar.gz -C /
```
```
chmod +x kubectl kube-proxy kubelet
```
```
sudo mv kubectl kube-proxy kubelet /usr/local/bin/
{
sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc
sudo mv runc.amd64 runc
chmod +x kubectl kube-proxy kubelet runc runsc
sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C /
}
```
### Configure CNI Networking
@@ -73,7 +79,7 @@ POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
Create the `bridge` network configuration file:
```
cat > 10-bridge.conf <<EOF
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.3.1",
"name": "bridge",
@@ -95,7 +101,7 @@ EOF
Create the `loopback` network configuration file:
```
cat > 99-loopback.conf <<EOF
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.3.1",
"type": "loopback"
@@ -103,55 +109,119 @@ cat > 99-loopback.conf <<EOF
EOF
```
Move the network configuration files to the CNI configuration directory:
### Configure containerd
Create the `containerd` configuration file:
```
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
sudo mkdir -p /etc/containerd/
```
```
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
[plugins.cri.containerd.untrusted_workload_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runsc"
runtime_root = "/run/containerd/runsc"
[plugins.cri.containerd.gvisor]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runsc"
runtime_root = "/run/containerd/runsc"
EOF
```
> Untrusted workloads will be run using the gVisor (runsc) runtime.
Create the `containerd.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubelet
```
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
{
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
}
```
```
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
```
Create the `kubelet-config.yaml` configuration file:
```
sudo mv ca.pem /var/lib/kubernetes/
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
Create the `kubelet.service` systemd unit file:
```
cat > kubelet.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=cri-containerd.service
Requires=cri-containerd.service
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--allow-privileged=true \\
--anonymous-auth=false \\
--authorization-mode=Webhook \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--cloud-provider= \\
--cluster-dns=10.32.0.10 \\
--cluster-domain=cluster.local \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/cri-containerd.sock \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--pod-cidr=${POD_CIDR} \\
--register-node=true \\
--runtime-request-timeout=15m \\
--tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
--tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
@@ -167,20 +237,30 @@ EOF
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```
Create the `kube-proxy.service` systemd unit file:
```
cat > kube-proxy.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--cluster-cidr=10.200.0.0/16 \\
--kubeconfig=/var/lib/kube-proxy/kubeconfig \\
--proxy-mode=iptables \\
--v=2
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
@@ -192,44 +272,33 @@ EOF
### Start the Worker Services
```
sudo mv kubelet.service kube-proxy.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable containerd cri-containerd kubelet kube-proxy
```
```
sudo systemctl start containerd cri-containerd kubelet kube-proxy
{
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
}
```
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
## Verification
Login to one of the controller nodes:
```
gcloud compute ssh controller-0
```
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
List the registered Kubernetes nodes:
```
kubectl get nodes
gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
```
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 18s v1.9.0
worker-1 Ready <none> 18s v1.9.0
worker-2 Ready <none> 18s v1.9.0
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 35s v1.12.0
worker-1 Ready <none> 36s v1.12.0
worker-2 Ready <none> 36s v1.12.0
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

View File

@@ -8,37 +8,29 @@ In this lab you will generate a kubeconfig file for the `kubectl` command line u
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
```
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
```
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
```
kubectl config use-context kubernetes-the-hard-way
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
}
```
## Verification
@@ -52,12 +44,12 @@ kubectl get componentstatuses
> output
```
NAME STATUS MESSAGE ERROR
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
```
List the nodes in the remote Kubernetes cluster:
@@ -69,10 +61,10 @@ kubectl get nodes
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 1m v1.9.0
worker-1 Ready <none> 1m v1.9.0
worker-2 Ready <none> 1m v1.9.0
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 117s v1.12.0
worker-1 Ready <none> 118s v1.12.0
worker-2 Ready <none> 118s v1.12.0
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

View File

@@ -50,8 +50,8 @@ gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-236a40a8bc992b5b kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
default-route-df77b1e818a56b30 kubernetes-the-hard-way 10.240.0.0/24 1000
default-route-081879136902de56 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000
default-route-55199a5aa126d7aa kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000

View File

@@ -1,22 +1,24 @@
# Deploying the DNS Cluster Add-on
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery to applications running inside the Kubernetes cluster.
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
## The DNS Cluster Add-on
Deploy the `kube-dns` cluster add-on:
Deploy the `coredns` cluster add-on:
```
kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
```
> output
```
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created
```
List the pods created by the `kube-dns` deployment:
@@ -28,9 +30,9 @@ kubectl get pods -l k8s-app=kube-dns -n kube-system
> output
```
NAME READY STATUS RESTARTS AGE
kube-dns-3097350089-gq015 3/3 Running 0 20s
kube-dns-3097350089-q64qc 3/3 Running 0 20s
NAME READY STATUS RESTARTS AGE
coredns-699f8ddd77-94qv9 1/1 Running 0 20s
coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
```
## Verification
@@ -38,7 +40,7 @@ kube-dns-3097350089-q64qc 3/3 Running 0 20s
Create a `busybox` deployment:
```
kubectl run busybox --image=busybox --command -- sleep 3600
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
```
List the pod created by the `busybox` deployment:
@@ -50,8 +52,8 @@ kubectl get pods -l run=busybox
> output
```
NAME READY STATUS RESTARTS AGE
busybox-2125412808-mt2vb 1/1 Running 0 15s
NAME READY STATUS RESTARTS AGE
busybox-bd8fb7cbd-vflm9 1/1 Running 0 10s
```
Retrieve the full name of the `busybox` pod:
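The retrieval command itself is not shown in this diff. A sketch consistent with the `$POD_NAME` variable used in the following steps, using a standard `jsonpath` query:

```
POD_NAME=$(kubectl get pods -l run=busybox \
  -o jsonpath="{.items[0].metadata.name}")
```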

View File

@@ -17,7 +17,12 @@ Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
```
gcloud compute ssh controller-0 \
--command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
--command "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem\
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```
> output
@@ -27,17 +32,17 @@ gcloud compute ssh controller-0 \
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a ea 7c 76 32 43 62 6f |:v1:key1:.|v2Cbo|
00000050 44 02 02 8c b7 ca fe 95 a5 33 f6 a1 18 6c 3d 53 |D........3...l=S|
00000060 e7 9c 51 ee 32 f6 e4 17 ea bb 11 d5 2f e2 40 00 |..Q.2......./.@.|
00000070 ae cf d9 e7 ba 7f 68 18 d3 c1 10 10 93 43 35 bd |......h......C5.|
00000080 24 dd 66 b4 f8 f9 82 77 4a d5 78 03 19 41 1e bc |$.f....wJ.x..A..|
00000090 94 3f 17 41 ad cc 8c ba 9f 8f 8e 56 97 7e 96 fb |.?.A.......V.~..|
000000a0 8f 2e 6a a5 bf 08 1f 0b c3 4b 2b 93 d1 ec f8 70 |..j......K+....p|
000000b0 c1 e4 1d 1a d2 0d f8 74 3a a1 4f 3c e0 c9 6d 3f |.......t:.O<..m?|
000000c0 de a3 f5 fd 76 aa 5e bc 27 d9 3c 6b 8f 54 97 45 |....v.^.'.<k.T.E|
000000d0 31 25 ff 23 90 a4 2a f2 db 78 b1 3b ca 21 f3 6b |1%.#..*..x.;.!.k|
000000e0 dd fb 8e 53 c6 23 0d 35 c8 0a |...S.#.5..|
00000040 3a 76 31 3a 6b 65 79 31 3a dd 3f 36 6c ce 65 9d |:v1:key1:.?6l.e.|
00000050 b3 b1 46 1a ba ae a2 1f e4 fa 13 0c 4b 6e 2c 3c |..F.........Kn,<|
00000060 15 fa 88 56 84 b7 aa c0 7a ca 66 f3 de db 2b a3 |...V....z.f...+.|
00000070 88 dc b1 b1 d8 2f 16 3e 6b 4a cb ac 88 5d 23 2d |...../.>kJ...]#-|
00000080 99 62 be 72 9f a5 01 38 15 c4 43 ac 38 5f ef 88 |.b.r...8..C.8_..|
00000090 3b 88 c1 e6 b6 06 4f ae a8 6b c8 40 70 ac 0a d3 |;.....O..k.@p...|
000000a0 3e dc 2b b6 0f 01 b6 8b e2 21 29 4d 32 d6 67 a6 |>.+......!)M2.g.|
000000b0 4e 6d bb 61 0d 85 22 ea f4 d6 2d 0a af 3c 71 85 |Nm.a.."...-..<q.|
000000c0 96 27 c9 ec 90 e3 56 8c 94 a7 1c 9a 0e 00 28 11 |.'....V.......(.|
000000d0 18 28 f4 33 42 d9 57 d9 e3 e9 1c 38 e3 bc 1e c3 |.(.3B.W....8....|
000000e0 d2 47 f3 20 60 be b8 57 a7 0a |.G. `..W..|
000000ea
```
@@ -62,8 +67,8 @@ kubectl get pods -l run=nginx
> output
```
NAME READY STATUS RESTARTS AGE
nginx-4217019353-b5gzn 1/1 Running 0 15s
NAME READY STATUS RESTARTS AGE
nginx-dbddb74b8-6lxg2 1/1 Running 0 10s
```
### Port Forwarding
@@ -99,13 +104,13 @@ curl --head http://127.0.0.1:8080
```
HTTP/1.1 200 OK
Server: nginx/1.13.7
Date: Mon, 18 Dec 2017 14:50:36 GMT
Server: nginx/1.15.4
Date: Sun, 30 Sep 2018 19:23:10 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT
Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
Connection: keep-alive
ETag: "5a1437f4-264"
ETag: "5baa4e63-264"
Accept-Ranges: bytes
```
@@ -131,7 +136,7 @@ kubectl logs $POD_NAME
> output
```
127.0.0.1 - - [18/Dec/2017:14:50:36 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"
```
### Exec
@@ -147,7 +152,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
> output
```
nginx version: nginx/1.13.7
nginx version: nginx/1.15.4
```
## Services
@@ -194,14 +199,128 @@ curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```
HTTP/1.1 200 OK
Server: nginx/1.13.7
Date: Mon, 18 Dec 2017 14:52:09 GMT
Server: nginx/1.15.4
Date: Sun, 30 Sep 2018 19:25:40 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT
Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
Connection: keep-alive
ETag: "5a1437f4-264"
ETag: "5baa4e63-264"
Accept-Ranges: bytes
```
## Untrusted Workloads
This section will verify the ability to run untrusted workloads using [gVisor](https://github.com/google/gvisor).
Create the `untrusted` pod:
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: untrusted
annotations:
io.kubernetes.cri.untrusted-workload: "true"
spec:
containers:
- name: webserver
image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
```
### Verification
In this section you will verify the `untrusted` pod is running under gVisor (runsc) by inspecting the assigned worker node.
Verify the `untrusted` pod is running:
```
kubectl get pods -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE
busybox-68654f944b-djjjb 1/1 Running 0 5m 10.200.0.2 worker-0
nginx-65899c769f-xkfcn 1/1 Running 0 4m 10.200.1.2 worker-1
untrusted 1/1 Running 0 10s 10.200.0.3 worker-0
```
Get the node name where the `untrusted` pod is running:
```
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
```
SSH into the worker node:
```
gcloud compute ssh ${INSTANCE_NAME}
```
List the containers running under gVisor:
```
sudo runsc --root /run/containerd/runsc/k8s.io list
```
```
I0930 19:27:13.255142 20832 x:0] ***************************
I0930 19:27:13.255326 20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0930 19:27:13.255386 20832 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0930 19:27:13.255429 20832 x:0] PID: 20832
I0930 19:27:13.255472 20832 x:0] UID: 0, GID: 0
I0930 19:27:13.255591 20832 x:0] Configuration:
I0930 19:27:13.255654 20832 x:0] RootDir: /run/containerd/runsc/k8s.io
I0930 19:27:13.255781 20832 x:0] Platform: ptrace
I0930 19:27:13.255893 20832 x:0] FileAccess: exclusive, overlay: false
I0930 19:27:13.256004 20832 x:0] Network: sandbox, logging: false
I0930 19:27:13.256128 20832 x:0] Strace: false, max size: 1024, syscalls: []
I0930 19:27:13.256238 20832 x:0] ***************************
ID PID STATUS BUNDLE CREATED OWNER
79e74d0cec52a1ff4bc2c9b0bb9662f73ea918959c08bca5bcf07ddb6cb0e1fd 20449 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/79e74d0cec52a1ff4bc2c9b0bb9662f73ea918959c08bca5bcf07ddb6cb0e1fd 0001-01-01T00:00:00Z
af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5 20510 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5 0001-01-01T00:00:00Z
I0930 19:27:13.259733 20832 x:0] Exiting with status: 0
```
Get the ID of the `untrusted` pod:
```
POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
pods --name untrusted -q)
```
Get the ID of the `webserver` container running in the `untrusted` pod:
```
CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
ps -p ${POD_ID} -q)
```
Use the gVisor `runsc` command to display the processes running inside the `webserver` container:
```
sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
```
> output
```
I0930 19:31:31.419765 21217 x:0] ***************************
I0930 19:31:31.419907 21217 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5]
I0930 19:31:31.419959 21217 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
I0930 19:31:31.420000 21217 x:0] PID: 21217
I0930 19:31:31.420041 21217 x:0] UID: 0, GID: 0
I0930 19:31:31.420081 21217 x:0] Configuration:
I0930 19:31:31.420115 21217 x:0] RootDir: /run/containerd/runsc/k8s.io
I0930 19:31:31.420188 21217 x:0] Platform: ptrace
I0930 19:31:31.420266 21217 x:0] FileAccess: exclusive, overlay: false
I0930 19:31:31.420424 21217 x:0] Network: sandbox, logging: false
I0930 19:31:31.420515 21217 x:0] Strace: false, max size: 1024, syscalls: []
I0930 19:31:31.420676 21217 x:0] ***************************
UID PID PPID C STIME TIME CMD
0 1 0 0 19:26 10ms app
I0930 19:31:31.422022 21217 x:0] Exiting with status: 0
```
Next: [Cleaning Up](14-cleanup.md)

View File

@@ -1,6 +1,6 @@
# Cleaning Up
In this labs you will delete the compute resources created during this tutorial.
In this lab you will delete the compute resources created during this tutorial.
## Compute Instances
@@ -17,18 +17,16 @@ gcloud -q compute instances delete \
Delete the external load balancer network resources:
```
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
```
{
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
```
gcloud -q compute target-pools delete kubernetes-target-pool
```
gcloud -q compute target-pools delete kubernetes-target-pool
Delete the `kubernetes-the-hard-way` static IP address:
gcloud -q compute http-health-checks delete kubernetes
```
gcloud -q compute addresses delete kubernetes-the-hard-way
gcloud -q compute addresses delete kubernetes-the-hard-way
}
```
Delete the `kubernetes-the-hard-way` firewall rules:
@@ -37,26 +35,21 @@ Delete the `kubernetes-the-hard-way` firewall rules:
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external
```
Delete the Pod network routes:
```
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
```
Delete the `kubernetes` subnet:
```
gcloud -q compute networks subnets delete kubernetes
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-check
```
Delete the `kubernetes-the-hard-way` network VPC:
```
gcloud -q compute networks delete kubernetes-the-hard-way
{
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks delete kubernetes-the-hard-way
}
```

Binary file not shown.
