4 Commits

Author            SHA1        Message                                              Date
Kelsey Hightower  8b92b87aa3  Update to Kubernetes 1.21.0                          2021-05-01 22:49:21 -07:00
Kelsey Hightower  ca96371e4d  Update to Kubernetes 1.18.6                          2020-07-18 00:42:18 -07:00
Kelsey Hightower  5c462220b7  Update to Kubernetes 1.15.3                          2019-09-15 12:10:26 -07:00
Kelsey Hightower  bf2850974e  Update to Kubernetes 1.12.0 and add CoreDNS support  2018-09-30 19:35:05 +00:00
17 changed files with 421 additions and 287 deletions

1
.gitignore vendored
View File

@@ -47,3 +47,4 @@ service-account-key.pem
service-account.csr
service-account.pem
service-account-csr.json
*.swp

3
COPYRIGHT.md Normal file
View File

@@ -0,0 +1,3 @@
# Copyright
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>

View File

@@ -1,11 +1,16 @@
# Kubernetes The Hard Way
This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](https://kubernetes.io/docs/setup).
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
> The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
## Copyright
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
## Target Audience
The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.
@@ -14,11 +19,11 @@ The target audience for this tutorial is someone planning to support a productio
Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.10.2
* [containerd Container Runtime](https://github.com/containerd/containerd) 1.1.0
* [gVisor](https://github.com/google/gvisor) 08879266fef3a67fac1a77f1ea133c3ac75759dd
* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
* [etcd](https://github.com/coreos/etcd) 3.3.5
* [kubernetes](https://github.com/kubernetes/kubernetes) v1.21.0
* [containerd](https://github.com/containerd/containerd) v1.4.4
* [coredns](https://github.com/coredns/coredns) v1.8.3
* [cni](https://github.com/containernetworking/cni) v0.9.1
* [etcd](https://github.com/etcd-io/etcd) v3.4.15
## Labs

View File

@@ -0,0 +1,180 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.8.3
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

View File

@@ -4,7 +4,7 @@
This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
[Estimated cost](https://cloud.google.com/products/calculator/#id=78df6ced-9c50-48f8-a670-bc5003f2ddaa) to run this tutorial: $0.22 per hour ($5.39 per day).
[Estimated cost](https://cloud.google.com/products/calculator#id=873932bc-0840-4176-b0fa-a8cfd4ca61ae) to run this tutorial: $0.23 per hour ($5.50 per day).
> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
@@ -14,7 +14,7 @@ This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) t
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
Verify the Google Cloud SDK version is 200.0.0 or higher:
Verify the Google Cloud SDK version is 338.0.0 or higher:
```
gcloud version
@@ -30,7 +30,13 @@ If you are using the `gcloud` command-line tool for the first time `init` is the
gcloud init
```
Otherwise set a default compute region:
Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials:
```
gcloud auth login
```
Next set a default compute region and compute zone:
```
gcloud config set compute/region us-west1
@@ -46,12 +52,12 @@ gcloud config set compute/zone us-west1-c
## Running Commands in Parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with `synchronize-panes` enabled to speed up the provisioning process.
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances; in those cases consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
> The use of tmux is optional and not required to complete this tutorial.
![tmux screenshot](images/tmux-screenshot.png)
> Enable `synchronize-panes`: `ctrl+b` then `shift :`. Then type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
> Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
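For reference, the same setting can be driven from a shell without touching the tmux prompt; a minimal sketch (the session name `kthw` is hypothetical):

```
tmux new-session -d -s kthw                           # start a detached session
tmux split-window -h -t kthw                          # add a second pane
tmux set-window-option -t kthw synchronize-panes on   # mirror keystrokes to every pane
tmux set-window-option -t kthw synchronize-panes off  # stop mirroring when done
```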
Next: [Installing the Client Tools](02-client-tools.md)

View File

@@ -7,13 +7,13 @@ In this lab you will install the command line utilities required to complete thi
The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
Download and install `cfssl` and `cfssljson` from the [cfssl repository](https://pkg.cfssl.org):
Download and install `cfssl` and `cfssljson`:
### OS X
```
curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssl
curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssljson
```
```
@@ -34,25 +34,21 @@ brew install cfssl
```
wget -q --show-progress --https-only --timestamping \
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
```
```
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
chmod +x cfssl cfssljson
```
```
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
```
```
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
```
### Verification
Verify `cfssl` version 1.2.0 or higher is installed:
Verify `cfssl` and `cfssljson` version 1.4.1 or higher is installed:
```
cfssl version
@@ -61,12 +57,17 @@ cfssl version
> output
```
Version: 1.2.0
Revision: dev
Runtime: go1.6
Version: 1.4.1
Runtime: go1.12.12
```
> The cfssljson command line utility does not provide a way to print its version.
```
cfssljson --version
```
```
Version: 1.4.1
Runtime: go1.12.12
```
## Install kubectl
@@ -75,7 +76,7 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S
### OS X
```
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/darwin/amd64/kubectl
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/darwin/amd64/kubectl
```
```
@@ -89,7 +90,7 @@ sudo mv kubectl /usr/local/bin/
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
```
```
@@ -102,7 +103,7 @@ sudo mv kubectl /usr/local/bin/
### Verification
Verify `kubectl` version 1.10.2 or higher is installed:
Verify `kubectl` version 1.21.0 or higher is installed:
```
kubectl version --client
@@ -111,7 +112,7 @@ kubectl version --client
> output
```
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
Next: [Provisioning Compute Resources](03-compute-resources.md)

View File

@@ -63,9 +63,9 @@ gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
> output
```
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp False
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp False
```
### Kubernetes Public IP Address
@@ -86,13 +86,13 @@ gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
> output
```
NAME REGION ADDRESS STATUS
kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
kubernetes-the-hard-way XX.XXX.XXX.XXX EXTERNAL us-west1 RESERVED
```
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 20.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
### Kubernetes Controllers
@@ -104,9 +104,9 @@ for i in 0 1 2; do
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1804-lts \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--machine-type e2-standard-2 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
@@ -128,9 +128,9 @@ for i in 0 1 2; do
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1804-lts \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--machine-type e2-standard-2 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
@@ -144,24 +144,24 @@ done
List the compute instances in your default compute zone:
```
gcloud compute instances list
gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
```
> output
```
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING
controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING
controller-2 us-west1-c n1-standard-1 10.240.0.12 XX.XXX.XXX.XX RUNNING
worker-0 us-west1-c n1-standard-1 10.240.0.20 XXX.XXX.XXX.XX RUNNING
worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX RUNNING
worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-west1-c e2-standard-2 10.240.0.10 XX.XX.XX.XXX RUNNING
controller-1 us-west1-c e2-standard-2 10.240.0.11 XX.XXX.XXX.XX RUNNING
controller-2 us-west1-c e2-standard-2 10.240.0.12 XX.XXX.XX.XXX RUNNING
worker-0 us-west1-c e2-standard-2 10.240.0.20 XX.XX.XXX.XXX RUNNING
worker-1 us-west1-c e2-standard-2 10.240.0.21 XX.XX.XX.XXX RUNNING
worker-2 us-west1-c e2-standard-2 10.240.0.22 XX.XXX.XX.XX RUNNING
```
## Configuring SSH Access
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as describe in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
Test SSH access to the `controller-0` compute instances:
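The invocation itself falls outside this hunk; it is the same `gcloud` SSH wrapper used throughout the tutorial:

```
gcloud compute ssh controller-0
```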
@@ -208,11 +208,8 @@ Waiting for SSH key to propagate.
After the SSH keys have been updated you'll be logged into the `controller-0` instance:
```
Welcome to Ubuntu 18.04 LTS (GNU/Linux 4.15.0-1006-gcp x86_64)
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1042-gcp x86_64)
...
Last login: Sun May 13 14:34:27 2018 from XX.XXX.XXX.XX
```
Type `exit` at the prompt to exit the `controller-0` compute instance:
@@ -224,7 +221,7 @@ $USER@controller-0:~$ exit
```
logout
Connection to XX.XXX.XXX.XXX closed
Connection to XX.XX.XX.XXX closed
```
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

View File

@@ -303,6 +303,8 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
@@ -326,13 +328,15 @@ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
```
> The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
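Once the control plane is up, that mapping can be confirmed directly; a quick check (a sketch, run from any machine with cluster access):

```
kubectl get service kubernetes --namespace default \
  --output jsonpath='{.spec.clusterIP}'
```

The expected value is `10.32.0.1`.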
Results:
```
@@ -342,7 +346,7 @@ kubernetes.pem
## The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as describe in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
Generate the `service-account` certificate and private key:

View File

@@ -22,6 +22,8 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
Generate a kubeconfig file for each worker node:
```
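# This hunk is truncated; a sketch of the per-worker loop this step drives,
# using the tutorial's cluster name and the Node Authorizer's
# system:node:<nodeName> user convention (assumed, not shown in this diff):
for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done
```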

View File

@@ -1,6 +1,6 @@
# Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/coreos/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
## Prerequisites
@@ -18,19 +18,19 @@ gcloud compute ssh controller-0
### Download and Install the etcd Binaries
Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:
Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.3.5/etcd-v3.3.5-linux-amd64.tar.gz"
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
```
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
{
tar -xvf etcd-v3.3.5-linux-amd64.tar.gz
sudo mv etcd-v3.3.5-linux-amd64/etcd* /usr/local/bin/
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
}
```
@@ -39,6 +39,7 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
```
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
```
@@ -65,6 +66,7 @@ Description=etcd
Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
@@ -118,9 +120,9 @@ sudo ETCDCTL_API=3 etcdctl member list \
> output
```
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379, false
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379, false
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
```
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)

View File

@@ -28,10 +28,10 @@ Download the official Kubernetes release binaries:
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl"
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"
```
Install the Kubernetes binaries:
@@ -62,6 +62,17 @@ INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
```
REGION=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/project/attributes/google-compute-default-region)
```
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $REGION \
--format 'value(address)')
```
Create the `kube-apiserver.service` systemd unit file:
```
@@ -82,20 +93,20 @@ ExecStart=/usr/local/bin/kube-apiserver \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--enable-swagger-ui=true \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--runtime-config='api/all=true' \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
@@ -127,7 +138,7 @@ Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--bind-address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
@@ -159,7 +170,7 @@ Create the `kube-scheduler.yaml` configuration file:
```
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: componentconfig/v1alpha1
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
@@ -209,6 +220,7 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala
Install a basic web server to handle HTTP health checks:
```
sudo apt-get update
sudo apt-get install -y nginx
```
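The site definition itself is outside this hunk; the proxy it configures has roughly this shape (a sketch; file paths follow Debian's `sites-available` layout):

```
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF

sudo mv kubernetes.default.svc.cluster.local /etc/nginx/sites-available/
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
```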
@@ -246,16 +258,11 @@ sudo systemctl enable nginx
### Verification
```
kubectl get componentstatuses --kubeconfig admin.kubeconfig
kubectl cluster-info --kubeconfig admin.kubeconfig
```
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Kubernetes control plane is running at https://127.0.0.1:6443
```
Test the nginx HTTP health check proxy:
@@ -266,11 +273,15 @@ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Mon, 14 May 2018 13:45:39 GMT
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 02 May 2021 04:19:29 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: c43f32eb-e038-457f-9474-571d43e5c325
X-Kubernetes-Pf-Prioritylevel-Uid: 8ba5908f-5569-4330-80fd-c643e7512366
ok
```
@@ -283,6 +294,8 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
```
gcloud compute ssh controller-0
```
@@ -291,7 +304,7 @@ Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.i
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
@@ -319,7 +332,7 @@ Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
@@ -339,7 +352,7 @@ EOF
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
### Provision a Network Load Balancer
@@ -378,6 +391,8 @@ Create the external load balancer network resources:
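The resource-creation block is elided from this hunk; the `gcloud` sequence it refers to has this shape (a sketch; resource names follow the tutorial's `kubernetes-the-hard-way` conventions):

```
{
  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  gcloud compute http-health-checks create kubernetes \
    --description "Kubernetes Health Check" \
    --host "kubernetes.default.svc.cluster.local" \
    --request-path "/healthz"

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
    --network kubernetes-the-hard-way \
    --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
    --allow tcp

  gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check kubernetes

  gcloud compute target-pools add-instances kubernetes-target-pool \
    --instances controller-0,controller-1,controller-2

  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address ${KUBERNETES_PUBLIC_ADDRESS} \
    --ports 6443 \
    --region $(gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool
}
```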
### Verification
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
@@ -397,12 +412,12 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
{
"major": "1",
"minor": "10",
"gitVersion": "v1.10.2",
"gitCommit": "81753b10df112992bf51bbc2c2f85208aad78335",
"minor": "21",
"gitVersion": "v1.21.0",
"gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
"gitTreeState": "clean",
"buildDate": "2018-04-27T09:10:24Z",
"goVersion": "go1.9.3",
"buildDate": "2021-04-08T16:25:06Z",
"goVersion": "go1.16.1",
"compiler": "gc",
"platform": "linux/amd64"
}

View File

@@ -1,6 +1,6 @@
# Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [gVisor](https://github.com/google/gvisor), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
## Prerequisites
@@ -27,18 +27,35 @@ Install the OS dependencies:
> The socat binary enables support for the `kubectl port-forward` command.
### Disable Swap
By default the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
Verify if swap is enabled:
```
sudo swapon --show
```
If the output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:
```
sudo swapoff -a
```
> To ensure swap remains off after reboot consult your Linux distro documentation.
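One common way to keep swap disabled across reboots on Ubuntu is to comment out the swap entries in `/etc/fstab` (a sketch; review the file before editing):

```
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```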
### Download and Install Worker Binaries
```
wget -q --show-progress --https-only --timestamping \
https://github.com/kubernetes-incubator/cri-tools/releases/download/v1.0.0-beta.0/crictl-v1.0.0-beta.0-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-the-hard-way/runsc \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
https://github.com/containerd/containerd/releases/download/v1.1.0/containerd-1.1.0.linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubelet
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
```
Create the installation directories:
@@ -57,12 +74,14 @@ Install the worker binaries:
```
{
chmod +x kubectl kube-proxy kubelet runc.amd64 runsc
mkdir containerd
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
sudo tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C /
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
}
```
@@ -80,7 +99,7 @@ Create the `bridge` network configuration file:
```
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.3.1",
"cniVersion": "0.4.0",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
@@ -102,7 +121,8 @@ Create the `loopback` network configuration file:
```
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.3.1",
"cniVersion": "0.4.0",
"name": "lo",
"type": "loopback"
}
EOF
@@ -125,15 +145,9 @@ cat << EOF | sudo tee /etc/containerd/config.toml
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
[plugins.cri.containerd.untrusted_workload_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runsc"
runtime_root = "/run/containerd/runsc"
EOF
```
> Untrusted workloads will be run using the gVisor (runsc) runtime.
Create the `containerd.service` systemd unit file:
```
@@ -189,12 +203,15 @@ clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
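For context, on an Ubuntu 20.04 host running `systemd-resolved` the default `/etc/resolv.conf` points at the local stub listener (`127.0.0.53`), which is what would cause the loop; a quick check:

```
# /etc/resolv.conf is a symlink to the stub resolver file
readlink -f /etc/resolv.conf
# The upstream resolver list the kubelet hands to pods instead:
cat /run/systemd/resolve/resolv.conf
```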
Create the `kubelet.service` systemd unit file:
```
@@ -287,10 +304,10 @@ gcloud compute ssh controller-0 \
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 20s v1.10.2
worker-1 Ready <none> 20s v1.10.2
worker-2 Ready <none> 20s v1.10.2
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 22s v1.21.0
worker-1 Ready <none> 22s v1.21.0
worker-2 Ready <none> 22s v1.21.0
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

View File

@@ -35,21 +35,17 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
## Verification
Check the health of the remote Kubernetes cluster:
Check the version of the remote Kubernetes cluster:
```
kubectl get componentstatuses
kubectl version
```
> output
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
List the nodes in the remote Kubernetes cluster:
@@ -61,10 +57,10 @@ kubectl get nodes
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 1m v1.10.2
worker-1 Ready <none> 1m v1.10.2
worker-2 Ready <none> 1m v1.10.2
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 2m35s v1.21.0
worker-1 Ready <none> 2m35s v1.21.0
worker-2 Ready <none> 2m35s v1.21.0
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

View File

@@ -50,8 +50,8 @@ gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-236a40a8bc992b5b kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
default-route-df77b1e818a56b30 kubernetes-the-hard-way 10.240.0.0/24 1000
default-route-1606ba68df692422 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 0
default-route-615e3652a8b74e4d kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000

View File

@@ -1,22 +1,24 @@
# Deploying the DNS Cluster Add-on
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery to applications running inside the Kubernetes cluster.
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
## The DNS Cluster Add-on
Deploy the `kube-dns` cluster add-on:
Deploy the `coredns` cluster add-on:
```
kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
```
> output
```
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment.extensions "kube-dns" created
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
```
List the pods created by the `kube-dns` deployment:
@@ -28,8 +30,9 @@ kubectl get pods -l k8s-app=kube-dns -n kube-system
> output
```
NAME READY STATUS RESTARTS AGE
kube-dns-3097350089-gq015 3/3 Running 0 20s
NAME READY STATUS RESTARTS AGE
coredns-8494f9c688-hh7r2 1/1 Running 0 10s
coredns-8494f9c688-zqrj2 1/1 Running 0 10s
```
## Verification
@@ -37,7 +40,7 @@ kube-dns-3097350089-gq015 3/3 Running 0 20s
Create a `busybox` deployment:
```
kubectl run busybox --image=busybox --command -- sleep 3600
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
```
List the pod created by the `busybox` deployment:
@@ -49,8 +52,8 @@ kubectl get pods -l run=busybox
> output
```
NAME READY STATUS RESTARTS AGE
busybox-2125412808-mt2vb 1/1 Running 0 15s
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 3s
```
Retrieve the full name of the `busybox` pod:
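The hunk ends before the command; the retrieval, and the DNS lookup it feeds, typically look like this (a sketch following the tutorial's flow):

```
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")

kubectl exec -ti $POD_NAME -- nslookup kubernetes
```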

View File

@@ -32,18 +32,25 @@ gcloud compute ssh controller-0 \
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 7b 8e 59 78 0f 59 09 |:v1:key1:{.Yx.Y.|
00000050 e2 6a ce cd f4 b6 4e ec bc 91 aa 87 06 29 39 8d |.j....N......)9.|
00000060 70 e8 5d c4 b1 66 69 49 60 8f c0 cc 55 d3 69 2b |p.]..fiI`...U.i+|
00000070 49 bb 0e 7b 90 10 b0 85 5b b1 e2 c6 33 b6 b7 31 |I..{....[...3..1|
00000080 25 99 a1 60 8f 40 a9 e5 55 8c 0f 26 ae 76 dc 5b |%..`.@..U..&.v.[|
00000090 78 35 f5 3e c1 1e bc 21 bb 30 e2 0c e3 80 1e 33 |x5.>...!.0.....3|
000000a0 90 79 46 6d 23 d8 f9 a2 d7 5d ed 4d 82 2e 9a 5e |.yFm#....].M...^|
000000b0 5d b6 3c 34 37 51 4b 83 de 99 1a ea 0f 2f 7c 9b |].<47QK....../|.|
000000c0 46 15 93 aa ba 72 ba b9 bd e1 a3 c0 45 90 b1 de |F....r......E...|
000000d0 c4 2e c8 d0 94 ec 25 69 7b af 08 34 93 12 3d 1c |......%i{..4..=.|
000000e0 fd 23 9b ba e8 d1 25 56 f4 0a |.#....%V..|
000000ea
00000040 3a 76 31 3a 6b 65 79 31 3a 97 d1 2c cd 89 0d 08 |:v1:key1:..,....|
00000050 29 3c 7d 19 41 cb ea d7 3d 50 45 88 82 a3 1f 11 |)<}.A...=PE.....|
00000060 26 cb 43 2e c8 cf 73 7d 34 7e b1 7f 9f 71 d2 51 |&.C...s}4~...q.Q|
00000070 45 05 16 e9 07 d4 62 af f8 2e 6d 4a cf c8 e8 75 |E.....b...mJ...u|
00000080 6b 75 1e b7 64 db 7d 7f fd f3 96 62 e2 a7 ce 22 |ku..d.}....b..."|
00000090 2b 2a 82 01 c3 f5 83 ae 12 8b d5 1d 2e e6 a9 90 |+*..............|
000000a0 bd f0 23 6c 0c 55 e2 52 18 78 fe bf 6d 76 ea 98 |..#l.U.R.x..mv..|
000000b0 fc 2c 17 36 e3 40 87 15 25 13 be d6 04 88 68 5b |.,.6.@..%.....h[|
000000c0 a4 16 81 f6 8e 3b 10 46 cb 2c ba 21 35 0c 5b 49 |.....;.F.,.!5.[I|
000000d0 e5 27 20 4c b3 8e 6b d0 91 c2 28 f1 cc fa 6a 1b |.' L..k...(...j.|
000000e0 31 19 74 e7 a5 66 6a 99 1c 84 c7 e0 b0 fc 32 86 |1.t..fj.......2.|
000000f0 f3 29 5a a4 1c d5 a4 e3 63 26 90 95 1e 27 d0 14 |.)Z.....c&...'..|
00000100 94 f0 ac 1a cd 0d b9 4b ae 32 02 a0 f8 b7 3f 0b |.......K.2....?.|
00000110 6f ad 1f 4d 15 8a d6 68 95 63 cf 7d 04 9a 52 71 |o..M...h.c.}..Rq|
00000120 75 ff 87 6b c5 42 e1 72 27 b5 e9 1a fe e8 c0 3f |u..k.B.r'......?|
00000130 d9 04 5e eb 5d 43 0d 90 ce fa 04 a8 4a b0 aa 01 |..^.]C......J...|
00000140 cf 6d 5b 80 70 5b 99 3c d6 5c c0 dc d1 f5 52 4a |.m[.p[.<.\....RJ|
00000150 2c 2d 28 5a 63 57 8e 4f df 0a |,-(ZcW.O..|
0000015a
```
The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
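The secret being inspected above is created earlier in the lab and read back with `etcdctl` over TLS; a sketch of that pair of commands (certificate paths assume the etcd setup from the earlier lab):

```
kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"

gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```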
@@ -55,20 +62,20 @@ In this section you will verify the ability to create and manage [Deployments](h
Create a deployment for the [nginx](https://nginx.org/en/) web server:
```
kubectl run nginx --image=nginx
kubectl create deployment nginx --image=nginx
```
List the pod created by the `nginx` deployment:
```
kubectl get pods -l run=nginx
kubectl get pods -l app=nginx
```
> output
```
NAME READY STATUS RESTARTS AGE
nginx-65899c769f-xkfcn 1/1 Running 0 15s
NAME READY STATUS RESTARTS AGE
nginx-f89759699-kpn5m 1/1 Running 0 10s
```
### Port Forwarding
@@ -78,7 +85,7 @@ In this section you will verify the ability to access applications remotely usin
Retrieve the full name of the `nginx` pod:
```
POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
```
Forward port `8080` on your local machine to port `80` of the `nginx` pod:
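The forwarding command sits outside this hunk; it is the standard `kubectl port-forward` call against the pod retrieved above:

```
kubectl port-forward $POD_NAME 8080:80
```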
@@ -104,13 +111,13 @@ curl --head http://127.0.0.1:8080
```
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 14 May 2018 13:59:21 GMT
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:29:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
ETag: "6075b537-264"
Accept-Ranges: bytes
```
@@ -136,7 +143,8 @@ kubectl logs $POD_NAME
> output
```
127.0.0.1 - - [14/May/2018:13:59:21 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-"
...
127.0.0.1 - - [02/May/2021:05:29:25 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"
```
### Exec
@@ -152,7 +160,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
> output
```
nginx version: nginx/1.13.12
nginx version: nginx/1.19.10
```
## Services
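The hunk below only shows the resulting HTTP response; the expose-and-probe steps it verifies follow this shape (a sketch; the firewall rule name follows the tutorial's conventions):

```
kubectl expose deployment nginx --port 80 --type NodePort

NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{.spec.ports[0].nodePort}')

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
  --allow=tcp:${NODE_PORT} \
  --network kubernetes-the-hard-way
```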
@@ -199,128 +207,14 @@ curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 14 May 2018 14:01:30 GMT
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:31:52 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
ETag: "6075b537-264"
Accept-Ranges: bytes
```
## Untrusted Workloads
This section will verify the ability to run untrusted workloads using [gVisor](https://github.com/google/gvisor).
Create the `untrusted` pod:
```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: untrusted
annotations:
io.kubernetes.cri.untrusted-workload: "true"
spec:
containers:
- name: webserver
image: gcr.io/hightowerlabs/helloworld:2.0.0
EOF
```
### Verification
In this section you will verify the `untrusted` pod is running under gVisor (runsc) by inspecting the assigned worker node.
Verify the `untrusted` pod is running:
```
kubectl get pods -o wide
```
```
NAME READY STATUS RESTARTS AGE IP NODE
busybox-68654f944b-djjjb 1/1 Running 0 5m 10.200.0.2 worker-0
nginx-65899c769f-xkfcn 1/1 Running 0 4m 10.200.1.2 worker-1
untrusted 1/1 Running 0 10s 10.200.0.3 worker-0
```
Get the node name where the `untrusted` pod is running:
```
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
```
SSH into the worker node:
```
gcloud compute ssh ${INSTANCE_NAME}
```
List the containers running under gVisor:
```
sudo runsc --root /run/containerd/runsc/k8s.io list
```
```
I0514 14:03:56.108368 14988 x:0] ***************************
I0514 14:03:56.108548 14988 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
I0514 14:03:56.108730 14988 x:0] Git Revision: 08879266fef3a67fac1a77f1ea133c3ac75759dd
I0514 14:03:56.108787 14988 x:0] PID: 14988
I0514 14:03:56.108838 14988 x:0] UID: 0, GID: 0
I0514 14:03:56.108877 14988 x:0] Configuration:
I0514 14:03:56.108912 14988 x:0] RootDir: /run/containerd/runsc/k8s.io
I0514 14:03:56.109000 14988 x:0] Platform: ptrace
I0514 14:03:56.109080 14988 x:0] FileAccess: proxy, overlay: false
I0514 14:03:56.109159 14988 x:0] Network: sandbox, logging: false
I0514 14:03:56.109238 14988 x:0] Strace: false, max size: 1024, syscalls: []
I0514 14:03:56.109315 14988 x:0] ***************************
ID PID STATUS BUNDLE CREATED OWNER
3528c6b270c76858e15e10ede61bd1100b77519e7c9972d51b370d6a3c60adbb 14766 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/3528c6b270c76858e15e10ede61bd1100b77519e7c9972d51b370d6a3c60adbb 2018-05-14T14:02:34.302378996Z
7ff747c919c2dcf31e64d7673340885138317c91c7c51ec6302527df680ba981 14716 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/7ff747c919c2dcf31e64d7673340885138317c91c7c51ec6302527df680ba981 2018-05-14T14:02:32.159552044Z
I0514 14:03:56.111287 14988 x:0] Exiting with status: 0
```
Get the ID of the `untrusted` pod:
```
POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
pods --name untrusted -q)
```
Get the ID of the `webserver` container running in the `untrusted` pod:
```
CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
ps -p ${POD_ID} -q)
```
Use the gVisor `runsc` command to display the processes running inside the `webserver` container:
```
sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}
```
> output
```
I0514 14:05:16.499237 15096 x:0] ***************************
I0514 14:05:16.499542 15096 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps 3528c6b270c76858e15e10ede61bd1100b77519e7c9972d51b370d6a3c60adbb]
I0514 14:05:16.499597 15096 x:0] Git Revision: 08879266fef3a67fac1a77f1ea133c3ac75759dd
I0514 14:05:16.499644 15096 x:0] PID: 15096
I0514 14:05:16.499695 15096 x:0] UID: 0, GID: 0
I0514 14:05:16.499734 15096 x:0] Configuration:
I0514 14:05:16.499769 15096 x:0] RootDir: /run/containerd/runsc/k8s.io
I0514 14:05:16.499880 15096 x:0] Platform: ptrace
I0514 14:05:16.499962 15096 x:0] FileAccess: proxy, overlay: false
I0514 14:05:16.500042 15096 x:0] Network: sandbox, logging: false
I0514 14:05:16.500120 15096 x:0] Strace: false, max size: 1024, syscalls: []
I0514 14:05:16.500197 15096 x:0] ***************************
UID PID PPID C STIME TIME CMD
0 1 0 0 14:02 40ms app
I0514 14:05:16.501354 15096 x:0] Exiting with status: 0
```
Next: [Cleaning Up](14-cleanup.md)

View File

@@ -9,7 +9,8 @@ Delete the controller and worker compute instances:
```
gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2
worker-0 worker-1 worker-2 \
--zone $(gcloud config get-value compute/zone)
```
## Networking
@@ -53,3 +54,10 @@ Delete the `kubernetes-the-hard-way` network VPC:
gcloud -q compute networks delete kubernetes-the-hard-way
}
```
Delete the `kubernetes-the-hard-way` compute address:
```
gcloud -q compute addresses delete kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
```