12 Commits

Author SHA1 Message Date
Kelsey Hightower
8b92b87aa3 Update to Kubernetes 1.21.0 2021-05-01 22:49:21 -07:00
Kelsey Hightower
ca96371e4d Update to Kubernetes 1.18.6 2020-07-18 00:42:18 -07:00
Kelsey Hightower
5c462220b7 Update to Kubernetes 1.15.3 2019-09-15 12:10:26 -07:00
Kelsey Hightower
bf2850974e Update to Kubernetes 1.12.0 and add CoreDNS support 2018-09-30 19:35:05 +00:00
Kelsey Hightower
b974042d95 Update to Kubernetes 1.10.2 and add gVisor support 2018-05-14 14:06:01 +00:00
Daniel J. Pritchett
4f5cecb5ed Add brief contribution guide 2018-01-30 07:38:21 -08:00
Martin Mosegaard Amdisen
68ebb95898 Fix minor typo in lab 14 2018-01-30 05:45:35 -08:00
Mark Vincze
8db0280ef6 Fix small typo 2018-01-30 05:45:16 -08:00
Andrew Thompson
a5c83f1cee Argument syntax for Filter deprecated.
Operator evaluation is changing for consistency across Google APIs.
2018-01-30 05:44:52 -08:00
githubixx
e2dbdaa66b core workload api incl. Deployment now stable 2018-01-30 05:44:05 -08:00
Andrew Lytvynov
05ca2d04cf Correct kubectl output for kube-dns pod list
https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml will create 1 kube-dns pod without autoscaler.
This may confuse the reader.
2018-01-30 05:42:28 -08:00
Phil Estes
f46642a6ba Update cri-containerd location and release
cri-containerd project is moving as a sub-project of containerd itself;
the GitHub migration is complete. Also, as of 9 Jan, the beta.1 release
of cri-containerd is available so the download link is updated.

Signed-off-by: Phil Estes <estesp@gmail.com>
2018-01-30 05:41:55 -08:00
20 changed files with 1140 additions and 392 deletions

.gitignore

@@ -2,12 +2,23 @@ admin-csr.json
admin-key.pem
admin.csr
admin.pem
admin.kubeconfig
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
encryption-config.yaml
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.csr
kube-controller-manager.kubeconfig
kube-controller-manager.pem
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.csr
kube-scheduler.kubeconfig
kube-scheduler.pem
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.csr
@@ -32,3 +43,8 @@ worker-2-key.pem
worker-2.csr
worker-2.kubeconfig
worker-2.pem
service-account-key.pem
service-account.csr
service-account.pem
service-account-csr.json
*.swp

CONTRIBUTING.md

@@ -0,0 +1,18 @@
This project is made possible by contributors like YOU! While all contributions are welcome, please follow the suggestions below to help your PR get merged.
## License
This project uses an [Apache license](LICENSE). Be sure you're comfortable with the implications of that before working up a patch.
## Review and merge process
Review and merge duties are managed by [@kelseyhightower](https://github.com/kelseyhightower). Expect some burden of proof for demonstrating the marginal value of adding new content to the tutorial.
Here are some examples of the review and justification process:
- [#208](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/208)
- [#282](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/282)
## Notes on minutiae
If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time.

COPYRIGHT.md

@@ -0,0 +1,3 @@
# Copyright
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>

README.md

@@ -1,11 +1,16 @@
# Kubernetes The Hard Way
This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](https://kubernetes.io/docs/setup).
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
> The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
## Copyright
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
## Target Audience
The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.
@@ -14,10 +19,11 @@ The target audience for this tutorial is someone planning to support a productio
Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.9.0
* [cri-containerd Container Runtime](https://github.com/kubernetes-incubator/cri-containerd) 1.0.0-beta.0
* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
* [etcd](https://github.com/coreos/etcd) 3.2.11
* [kubernetes](https://github.com/kubernetes/kubernetes) v1.21.0
* [containerd](https://github.com/containerd/containerd) v1.4.4
* [coredns](https://github.com/coredns/coredns) v1.8.3
* [cni](https://github.com/containernetworking/cni) v0.9.1
* [etcd](https://github.com/etcd-io/etcd) v3.4.15
## Labs

deployments/coredns.yaml

@@ -0,0 +1,180 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: "CoreDNS"
spec:
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- name: coredns
image: coredns/coredns:1.7.0
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: 10.32.0.10
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP

deployments/kube-dns.yaml

@@ -51,7 +51,7 @@ metadata:
labels:
addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: kube-dns

docs/01-prerequisites.md

@@ -4,7 +4,7 @@
This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
[Estimated cost](https://cloud.google.com/products/calculator/#id=78df6ced-9c50-48f8-a670-bc5003f2ddaa) to run this tutorial: $0.22 per hour ($5.39 per day).
[Estimated cost](https://cloud.google.com/products/calculator#id=873932bc-0840-4176-b0fa-a8cfd4ca61ae) to run this tutorial: $0.23 per hour ($5.50 per day).
> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
@@ -14,7 +14,7 @@ This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) t
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
Verify the Google Cloud SDK version is 183.0.0 or higher:
Verify the Google Cloud SDK version is 338.0.0 or higher:
```
gcloud version
@@ -30,7 +30,13 @@ If you are using the `gcloud` command-line tool for the first time `init` is the
gcloud init
```
Otherwise set a default compute region:
Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials:
```
gcloud auth login
```
Next set a default compute region and compute zone:
```
gcloud config set compute/region us-west1
@@ -44,4 +50,14 @@ gcloud config set compute/zone us-west1-c
> Use the `gcloud compute zones list` command to view additional regions and zones.
## Running Commands in Parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances. In those cases, consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
> The use of tmux is optional and not required to complete this tutorial.
![tmux screenshot](images/tmux-screenshot.png)
> Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
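A minimal sketch of that flow, assuming tmux's default key bindings:

```
tmux new-session -s kubernetes-the-hard-way
# ctrl+b " splits the window into stacked panes; ctrl+b % splits side by side
# ctrl+b shift+: opens the tmux command prompt, where you type:
#   set synchronize-panes on
```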
Next: [Installing the Client Tools](02-client-tools.md)

docs/02-client-tools.md

@@ -7,13 +7,13 @@ In this lab you will install the command line utilities required to complete thi
The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
Download and install `cfssl` and `cfssljson` from the [cfssl repository](https://pkg.cfssl.org):
Download and install `cfssl` and `cfssljson`:
### OS X
```
curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssl
curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssljson
```
```
@@ -24,29 +24,31 @@ chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
```
Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:
```
brew install cfssl
```
### Linux
```
wget -q --show-progress --https-only --timestamping \
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
```
```
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
chmod +x cfssl cfssljson
```
```
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
```
```
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
sudo mv cfssl cfssljson /usr/local/bin/
```
### Verification
Verify `cfssl` version 1.2.0 or higher is installed:
Verify `cfssl` and `cfssljson` version 1.4.1 or higher are installed:
```
cfssl version
@@ -55,12 +57,17 @@ cfssl version
> output
```
Version: 1.2.0
Revision: dev
Runtime: go1.6
Version: 1.4.1
Runtime: go1.12.12
```
> The cfssljson command line utility does not provide a way to print its version.
```
cfssljson --version
```
```
Version: 1.4.1
Runtime: go1.12.12
```
## Install kubectl
@@ -69,7 +76,7 @@ The `kubectl` command line utility is used to interact with the Kubernetes API S
### OS X
```
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/darwin/amd64/kubectl
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/darwin/amd64/kubectl
```
```
@@ -83,7 +90,7 @@ sudo mv kubectl /usr/local/bin/
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
```
```
@@ -96,7 +103,7 @@ sudo mv kubectl /usr/local/bin/
### Verification
Verify `kubectl` version 1.9.0 or higher is installed:
Verify `kubectl` version 1.21.0 or higher is installed:
```
kubectl version --client
@@ -105,7 +112,7 @@ kubectl version --client
> output
```
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
Next: [Provisioning Compute Resources](03-compute-resources.md)

docs/03-compute-resources.md

@@ -57,15 +57,15 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute firewall-rules list --filter "network: kubernetes-the-hard-way"
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```
> output
```
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp False
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp False
```
### Kubernetes Public IP Address
@@ -86,13 +86,13 @@ gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
> output
```
NAME REGION ADDRESS STATUS
kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
kubernetes-the-hard-way XX.XXX.XXX.XXX EXTERNAL us-west1 RESERVED
```
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [cri-containerd container runtime](https://github.com/kubernetes-incubator/cri-containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 20.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
### Kubernetes Controllers
@@ -104,9 +104,9 @@ for i in 0 1 2; do
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--machine-type e2-standard-2 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
@@ -128,9 +128,9 @@ for i in 0 1 2; do
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--machine-type e2-standard-2 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
@@ -144,19 +144,84 @@ done
List the compute instances in your default compute zone:
```
gcloud compute instances list
gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
```
> output
```
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING
controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING
controller-2 us-west1-c n1-standard-1 10.240.0.12 XX.XXX.XXX.XX RUNNING
worker-0 us-west1-c n1-standard-1 10.240.0.20 XXX.XXX.XXX.XX RUNNING
worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX RUNNING
worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-west1-c e2-standard-2 10.240.0.10 XX.XX.XX.XXX RUNNING
controller-1 us-west1-c e2-standard-2 10.240.0.11 XX.XXX.XXX.XX RUNNING
controller-2 us-west1-c e2-standard-2 10.240.0.12 XX.XXX.XX.XXX RUNNING
worker-0 us-west1-c e2-standard-2 10.240.0.20 XX.XX.XXX.XXX RUNNING
worker-1 us-west1-c e2-standard-2 10.240.0.21 XX.XX.XX.XXX RUNNING
worker-2 us-west1-c e2-standard-2 10.240.0.22 XX.XXX.XX.XX RUNNING
```
## Configuring SSH Access
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
Test SSH access to the `controller-0` compute instance:
```
gcloud compute ssh controller-0
```
If this is your first time connecting to a compute instance, SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
```
WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
```
At this point the generated SSH keys will be uploaded and stored in your project:
```
Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:nz1i8jHmgQuGt+WscqP5SeIaSy5wyIJeL71MuV+QruE $USER@$HOSTNAME
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . |
|o. oS |
|=... .o .o o |
|+.+ =+=.+.X o |
|.+ ==O*B.B = . |
| .+.=EB++ o |
+----[SHA256]-----+
Updating project ssh metadata...-Updated [https://www.googleapis.com/compute/v1/projects/$PROJECT_ID].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
```
After the SSH keys have been updated you'll be logged into the `controller-0` instance:
```
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1042-gcp x86_64)
...
```
Type `exit` at the prompt to exit the `controller-0` compute instance:
```
$USER@controller-0:~$ exit
```
> output
```
logout
Connection to XX.XX.XX.XXX closed
```
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

docs/04-certificate-authority.md

@@ -1,14 +1,16 @@
# Provisioning a CA and Generating TLS Certificates
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kubelet, and kube-proxy.
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
## Certificate Authority
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
Create the CA configuration file:
Generate the CA configuration file, certificate, and private key:
```
{
cat > ca-config.json <<EOF
{
"signing": {
@@ -24,11 +26,7 @@ cat > ca-config.json <<EOF
}
}
EOF
```
Create the CA certificate signing request:
```
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
@@ -47,12 +45,10 @@ cat > ca-csr.json <<EOF
]
}
EOF
```
Generate the CA certificate and private key:
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
```
Results:
@@ -68,9 +64,11 @@ In this section you will generate client and server certificates for each Kubern
### The Admin Client Certificate
Create the `admin` client certificate signing request:
Generate the `admin` client certificate and private key:
```
{
cat > admin-csr.json <<EOF
{
"CN": "admin",
@@ -89,17 +87,15 @@ cat > admin-csr.json <<EOF
]
}
EOF
```
Generate the `admin` client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
```
Results:
@@ -163,11 +159,57 @@ worker-2-key.pem
worker-2.pem
```
### The kube-proxy Client Certificate
### The Controller Manager Client Certificate
Create the `kube-proxy` client certificate signing request:
Generate the `kube-controller-manager` client certificate and private key:
```
{
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
```
Results:
```
kube-controller-manager-key.pem
kube-controller-manager.pem
```
### The Kube Proxy Client Certificate
Generate the `kube-proxy` client certificate and private key:
```
{
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
@@ -186,17 +228,15 @@ cat > kube-proxy-csr.json <<EOF
]
}
EOF
```
Generate the `kube-proxy` client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
}
```
Results:
@@ -206,21 +246,65 @@ kube-proxy-key.pem
kube-proxy.pem
```
### The Scheduler Client Certificate
Generate the `kube-scheduler` client certificate and private key:
```
{
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
```
Results:
```
kube-scheduler-key.pem
kube-scheduler.pem
```
### The Kubernetes API Server Certificate
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Retrieve the `kubernetes-the-hard-way` static IP address:
Generate the Kubernetes API Server certificate and private key:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Create the Kubernetes API Server certificate signing request:
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
```
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
@@ -239,20 +323,20 @@ cat > kubernetes-csr.json <<EOF
]
}
EOF
```
Generate the Kubernetes API Server certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
```
> The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
Results:
```
@@ -260,6 +344,52 @@ kubernetes-key.pem
kubernetes.pem
```
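To confirm the subject alternative names made it into the certificate, you can inspect it with openssl (an optional check, assuming openssl is available):

```
openssl x509 -in kubernetes.pem -text -noout | grep -A1 "Subject Alternative Name"
```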
## The Service Account Key Pair
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
Generate the `service-account` certificate and private key:
```
{
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
```
Results:
```
service-account-key.pem
service-account.pem
```
## Distribute the Client and Server Certificates
Copy the appropriate certificates and private keys to each worker instance:
@@ -274,10 +404,11 @@ Copy the appropriate certificates and private keys to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${instance}:~/
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/
done
```
> The `kube-proxy` and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)

docs/05-kubernetes-configuration-files.md

@@ -4,9 +4,7 @@ In this lab you will generate [Kubernetes configuration files](https://kubernete
## Client Authentication Configs
In this section you will generate kubeconfig files for the `kubelet` and `kube-proxy` clients.
> The `scheduler` and `controller manager` access the Kubernetes API Server locally over an insecure API port which does not require authentication. The Kubernetes API Server's insecure port is only enabled for local access.
In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.
### Kubernetes Public IP Address
@@ -24,6 +22,8 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
When generating kubeconfig files for Kubelets, the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
Generate a kubeconfig file for each worker node:
```
@@ -62,32 +62,137 @@ worker-2.kubeconfig
Generate a kubeconfig file for the `kube-proxy` service:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
```
```
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
```
Results:
```
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kube-proxy.kubeconfig
```
### The kube-controller-manager Kubernetes Configuration File
Generate a kubeconfig file for the `kube-controller-manager` service:
```
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
```
Results:
```
kube-controller-manager.kubeconfig
```
### The kube-scheduler Kubernetes Configuration File
Generate a kubeconfig file for the `kube-scheduler` service:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
```
Results:
```
kube-scheduler.kubeconfig
```
### The admin Kubernetes Configuration File
Generate a kubeconfig file for the `admin` user:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
}
```
Results:
```
admin.kubeconfig
```
## Distribute the Kubernetes Configuration Files
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
@@ -98,4 +203,12 @@ for instance in worker-0 worker-1 worker-2; do
done
```
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)

docs/07-bootstrapping-etcd.md

@@ -1,6 +1,6 @@
# Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/coreos/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
## Prerequisites
@@ -10,35 +10,38 @@ The commands in this lab must be run on each controller instance: `controller-0`
gcloud compute ssh controller-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Bootstrapping an etcd Cluster Member
### Download and Install the etcd Binaries
Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:
Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz"
"https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz"
```
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
tar -xvf etcd-v3.2.11-linux-amd64.tar.gz
```
```
sudo mv etcd-v3.2.11-linux-amd64/etcd* /usr/local/bin/
{
tar -xvf etcd-v3.4.15-linux-amd64.tar.gz
sudo mv etcd-v3.4.15-linux-amd64/etcd* /usr/local/bin/
}
```
### Configure the etcd Server
```
sudo mkdir -p /etc/etcd /var/lib/etcd
```
```
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
```
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
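A sketch of that metadata query (the same endpoint is used in the controller lab later in this guide):

```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```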
@@ -57,12 +60,13 @@ ETCD_NAME=$(hostname -s)
Create the `etcd.service` systemd unit file:
```
cat > etcd.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
@@ -75,7 +79,7 @@ ExecStart=/usr/local/bin/etcd \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
@@ -92,19 +96,11 @@ EOF
### Start the etcd Server
```
sudo mv etcd.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable etcd
```
```
sudo systemctl start etcd
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
@@ -114,15 +110,19 @@ sudo systemctl start etcd
List the etcd cluster members:
```
ETCDCTL_API=3 etcdctl member list
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
```
> output
```
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379, false
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379, false
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
```
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)

docs/08-bootstrapping-kubernetes-controllers.md

@@ -10,58 +10,79 @@ The commands in this lab must be run on each controller instance: `controller-0`
gcloud compute ssh controller-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Provision the Kubernetes Control Plane
Create the Kubernetes configuration directory:
```
sudo mkdir -p /etc/kubernetes/config
```
### Download and Install the Kubernetes Controller Binaries
Download the official Kubernetes release binaries:
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl"
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"
```
Install the Kubernetes binaries:
```
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
```
```
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
{
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
```
### Configure the Kubernetes API Server
```
sudo mkdir -p /var/lib/kubernetes/
{
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml /var/lib/kubernetes/
}
```
```
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/
```
The instance internal IP address will be used advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
```
REGION=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/project/attributes/google-compute-default-region)
```
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $REGION \
--format 'value(address)')
```
Create the `kube-apiserver.service` systemd unit file:
```
cat > kube-apiserver.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--admission-control=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
@@ -72,23 +93,22 @@ ExecStart=/usr/local/bin/kube-apiserver \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-swagger-ui=true \\
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--insecure-bind-address=127.0.0.1 \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--runtime-config='api/all=true' \\
--service-account-key-file=/var/lib/kubernetes/service-account.pem \\
--service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-account-issuer=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-ca-file=/var/lib/kubernetes/ca.pem \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
@@ -102,26 +122,33 @@ EOF
### Configure the Kubernetes Controller Manager
Move the `kube-controller-manager` kubeconfig into place:
```
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```
Create the `kube-controller-manager.service` systemd unit file:
```
cat > kube-controller-manager.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--bind-address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
@@ -133,18 +160,36 @@ EOF
### Configure the Kubernetes Scheduler
Move the `kube-scheduler` kubeconfig into place:
```
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
Create the `kube-scheduler.yaml` configuration file:
```
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
leaderElect: true
EOF
```
Create the `kube-scheduler.service` systemd unit file:
```
cat > kube-scheduler.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--leader-elect=true \\
--master=http://127.0.0.1:8080 \\
--config=/etc/kubernetes/config/kube-scheduler.yaml \\
--v=2
Restart=on-failure
RestartSec=5
@@ -157,36 +202,88 @@ EOF
### Start the Controller Services
```
sudo mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
```
```
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
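If you'd rather wait on it explicitly, a simple polling loop (an optional sketch; the `/healthz` endpoint does not require authentication by default) does the job:

```
# poll the API server health endpoint until it responds with HTTP 200
until curl -sf --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz > /dev/null; do
  sleep 1
done
```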
### Enable HTTP Health Checks
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks, which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround, the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
> The `/healthz` API server endpoint does not require authentication by default.
Install a basic web server to handle HTTP health checks:
```
sudo apt-get update
sudo apt-get install -y nginx
```
```
cat > kubernetes.default.svc.cluster.local <<EOF
server {
listen 80;
server_name kubernetes.default.svc.cluster.local;
location /healthz {
proxy_pass https://127.0.0.1:6443/healthz;
proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
}
}
EOF
```
```
{
sudo mv kubernetes.default.svc.cluster.local \
/etc/nginx/sites-available/kubernetes.default.svc.cluster.local
sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
}
```
```
sudo systemctl restart nginx
```
```
sudo systemctl enable nginx
```
### Verification
```
kubectl get componentstatuses
kubectl cluster-info --kubeconfig admin.kubeconfig
```
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Kubernetes control plane is running at https://127.0.0.1:6443
```
Test the nginx HTTP health check proxy:
```
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```
```
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 02 May 2021 04:19:29 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: c43f32eb-e038-457f-9474-571d43e5c325
X-Kubernetes-Pf-Prioritylevel-Uid: 8ba5908f-5569-4330-80fd-c643e7512366
ok
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
@@ -197,6 +294,8 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
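For illustration only (a hypothetical check, not a step in this lab), a SubjectAccessReview asks the API server whether a given user may perform an action, e.g. whether the `kubernetes` user may access a Kubelet's `proxy` subresource:

```
cat <<EOF | kubectl create --kubeconfig admin.kubeconfig -o yaml -f -
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: kubernetes
  resourceAttributes:
    verb: get
    resource: nodes
    subresource: proxy
EOF
```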
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
```
gcloud compute ssh controller-0
```
@@ -204,8 +303,8 @@ gcloud compute ssh controller-0
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
@@ -232,8 +331,8 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
```
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
@@ -253,35 +352,47 @@ EOF
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
### Provision a Network Load Balancer
Create the external load balancer network resources:
```
gcloud compute target-pools create kubernetes-target-pool
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
```
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(name)')
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
```
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
}
```
### Verification
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
@@ -301,12 +412,12 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
```
{
"major": "1",
"minor": "9",
"gitVersion": "v1.9.0",
"gitCommit": "925c127ec6b946659ad0fd596fa959be43f0cc05",
"minor": "21",
"gitVersion": "v1.21.0",
"gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
"gitTreeState": "clean",
"buildDate": "2017-12-15T20:55:30Z",
"goVersion": "go1.9.2",
"buildDate": "2021-04-08T16:25:06Z",
"goVersion": "go1.16.1",
"compiler": "gc",
"platform": "linux/amd64"
}

docs/09-bootstrapping-kubernetes-workers.md

@@ -1,6 +1,6 @@
# Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-containerd](https://github.com/kubernetes-incubator/cri-containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
## Prerequisites
@@ -10,25 +10,52 @@ The commands in this lab must be run on each worker instance: `worker-0`, `worke
gcloud compute ssh worker-0
```
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
## Provisioning a Kubernetes Worker Node
Install the OS dependencies:
```
sudo apt-get -y install socat
{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}
```
> The socat binary enables support for the `kubectl port-forward` command.
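For example, once the cluster is running, port forwarding can tunnel a local port to a pod (hypothetical pod name):

```
# forward local port 8080 to port 80 of a pod named "nginx"
kubectl port-forward nginx 8080:80
```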
### Disable Swap
By default the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
Verify if swap is enabled:
```
sudo swapon --show
```
If the output is empty then swap is not enabled. If swap is enabled, run the following command to disable swap immediately:
```
sudo swapoff -a
```
> To ensure swap remains off after a reboot, consult your Linux distro documentation.
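On most distributions this amounts to commenting out the swap entry in `/etc/fstab`, for example (a sketch; the exact entry varies by system):

```
# comment out any fstab line whose filesystem type is swap
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```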
### Download and Install Worker Binaries
```
wget -q --show-progress --https-only --timestamping \
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
https://github.com/kubernetes-incubator/cri-containerd/releases/download/v1.0.0-beta.0/cri-containerd-1.0.0-beta.0.linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubelet
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
```
Create the installation directories:
@@ -46,19 +73,16 @@ sudo mkdir -p \
Install the worker binaries:
```
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
```
```
sudo tar -xvf cri-containerd-1.0.0-beta.0.linux-amd64.tar.gz -C /
```
```
chmod +x kubectl kube-proxy kubelet
```
```
sudo mv kubectl kube-proxy kubelet /usr/local/bin/
{
mkdir containerd
tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
}
```
### Configure CNI Networking
@@ -73,9 +97,9 @@ POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
Create the `bridge` network configuration file:
```
cat > 10-bridge.conf <<EOF
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.3.1",
"cniVersion": "0.4.0",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
@@ -95,63 +119,118 @@ EOF
Create the `loopback` network configuration file:
```
cat > 99-loopback.conf <<EOF
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.3.1",
"cniVersion": "0.4.0",
"name": "lo",
"type": "loopback"
}
EOF
```
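As an optional sanity check, confirm both CNI configuration files are in place:

```
# Both 10-bridge.conf and 99-loopback.conf should be listed.
ls /etc/cni/net.d/
```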
Move the network configuration files to the CNI configuration directory:
### Configure containerd
Create the `containerd` configuration file:
```
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
sudo mkdir -p /etc/containerd/
```
```
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
[plugins.cri.containerd]
snapshotter = "overlayfs"
[plugins.cri.containerd.default_runtime]
runtime_type = "io.containerd.runtime.v1.linux"
runtime_engine = "/usr/local/bin/runc"
runtime_root = ""
EOF
```
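Optionally, ask containerd to print its merged configuration; a TOML syntax error in the file would surface here. This assumes the `containerd` binary installed earlier in this lab:

```
# Prints the merged runtime configuration, validating the TOML along the way.
sudo containerd config dump | head
```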
Create the `containerd.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubelet
```
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
{
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
}
```
```
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
```
Create the `kubelet-config.yaml` configuration file:
```
sudo mv ca.pem /var/lib/kubernetes/
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
> The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
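As an optional check on hosts running `systemd-resolved`, confirm the resolver file referenced above exists:

```
# Should exist on systemd-resolved hosts; if missing, adjust resolvConf to match your system.
ls -l /run/systemd/resolve/resolv.conf
```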
Create the `kubelet.service` systemd unit file:
```
cat > kubelet.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=cri-containerd.service
Requires=cri-containerd.service
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--allow-privileged=true \\
--anonymous-auth=false \\
--authorization-mode=Webhook \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--cloud-provider= \\
--cluster-dns=10.32.0.10 \\
--cluster-domain=cluster.local \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/cri-containerd.sock \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--pod-cidr=${POD_CIDR} \\
--register-node=true \\
--runtime-request-timeout=15m \\
--tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
--tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
@@ -167,20 +246,30 @@ EOF
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
```
Create the `kube-proxy.service` systemd unit file:
```
cat > kube-proxy.service <<EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--cluster-cidr=10.200.0.0/16 \\
--kubeconfig=/var/lib/kube-proxy/kubeconfig \\
--proxy-mode=iptables \\
--v=2
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
@@ -192,44 +281,33 @@ EOF
### Start the Worker Services
```
sudo mv kubelet.service kube-proxy.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable containerd cri-containerd kubelet kube-proxy
```
```
sudo systemctl start containerd cri-containerd kubelet kube-proxy
{
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
}
```
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
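Optionally confirm all three services came up on each worker; any failures can be inspected with `journalctl -u <unit>`:

```
# Prints "active" once per running unit.
sudo systemctl is-active containerd kubelet kube-proxy
```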
## Verification
Log in to one of the controller nodes:
```
gcloud compute ssh controller-0
```
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
List the registered Kubernetes nodes:
```
kubectl get nodes
gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
```
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 18s v1.9.0
worker-1 Ready <none> 18s v1.9.0
worker-2 Ready <none> 18s v1.9.0
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 22s v1.21.0
worker-1 Ready <none> 22s v1.21.0
worker-2 Ready <none> 22s v1.21.0
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

View File

@@ -8,56 +8,44 @@ In this lab you will generate a kubeconfig file for the `kubectl` command line u
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability, the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
```
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
```
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
```
kubectl config use-context kubernetes-the-hard-way
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
}
```
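Optionally confirm the default kubeconfig now points at the new context:

```
# Should print: kubernetes-the-hard-way
kubectl config current-context
```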
## Verification
Check the health of the remote Kubernetes cluster:
Check the version of the remote Kubernetes cluster:
```
kubectl get componentstatuses
kubectl version
```
> output
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```
List the nodes in the remote Kubernetes cluster:
@@ -69,10 +57,10 @@ kubectl get nodes
> output
```
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 1m v1.9.0
worker-1 Ready <none> 1m v1.9.0
worker-2 Ready <none> 1m v1.9.0
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 2m35s v1.21.0
worker-1 Ready <none> 2m35s v1.21.0
worker-2 Ready <none> 2m35s v1.21.0
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

View File

@@ -50,8 +50,8 @@ gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-236a40a8bc992b5b kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
default-route-df77b1e818a56b30 kubernetes-the-hard-way 10.240.0.0/24 1000
default-route-1606ba68df692422 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 0
default-route-615e3652a8b74e4d kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000

View File

@@ -1,22 +1,24 @@
# Deploying the DNS Cluster Add-on
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/), which provides DNS-based service discovery to applications running inside the Kubernetes cluster.
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/), which provides DNS-based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
## The DNS Cluster Add-on
Deploy the `kube-dns` cluster add-on:
Deploy the `coredns` cluster add-on:
```
kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
```
> output
```
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
```
List the pods created by the `kube-dns` deployment:
@@ -28,9 +30,9 @@ kubectl get pods -l k8s-app=kube-dns -n kube-system
> output
```
NAME READY STATUS RESTARTS AGE
kube-dns-3097350089-gq015 3/3 Running 0 20s
kube-dns-3097350089-q64qc 3/3 Running 0 20s
NAME READY STATUS RESTARTS AGE
coredns-8494f9c688-hh7r2 1/1 Running 0 10s
coredns-8494f9c688-zqrj2 1/1 Running 0 10s
```
## Verification
@@ -38,7 +40,7 @@ kube-dns-3097350089-q64qc 3/3 Running 0 20s
Create a `busybox` deployment:
```
kubectl run busybox --image=busybox --command -- sleep 3600
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
```
List the pod created by the `busybox` deployment:
@@ -50,8 +52,8 @@ kubectl get pods -l run=busybox
> output
```
NAME READY STATUS RESTARTS AGE
busybox-2125412808-mt2vb 1/1 Running 0 15s
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 3s
```
Retrieve the full name of the `busybox` pod:
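The retrieval command itself is unchanged context omitted by this diff; it follows the same `jsonpath` pattern used in the smoke test lab (a sketch, assuming the pod carries the `run=busybox` label):

```
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
```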

View File

@@ -17,7 +17,12 @@ Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
```
gcloud compute ssh controller-0 \
--command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
--command "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```
> output
@@ -27,18 +32,25 @@ gcloud compute ssh controller-0 \
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a ea 7c 76 32 43 62 6f |:v1:key1:.|v2Cbo|
00000050 44 02 02 8c b7 ca fe 95 a5 33 f6 a1 18 6c 3d 53 |D........3...l=S|
00000060 e7 9c 51 ee 32 f6 e4 17 ea bb 11 d5 2f e2 40 00 |..Q.2......./.@.|
00000070 ae cf d9 e7 ba 7f 68 18 d3 c1 10 10 93 43 35 bd |......h......C5.|
00000080 24 dd 66 b4 f8 f9 82 77 4a d5 78 03 19 41 1e bc |$.f....wJ.x..A..|
00000090 94 3f 17 41 ad cc 8c ba 9f 8f 8e 56 97 7e 96 fb |.?.A.......V.~..|
000000a0 8f 2e 6a a5 bf 08 1f 0b c3 4b 2b 93 d1 ec f8 70 |..j......K+....p|
000000b0 c1 e4 1d 1a d2 0d f8 74 3a a1 4f 3c e0 c9 6d 3f |.......t:.O<..m?|
000000c0 de a3 f5 fd 76 aa 5e bc 27 d9 3c 6b 8f 54 97 45 |....v.^.'.<k.T.E|
000000d0 31 25 ff 23 90 a4 2a f2 db 78 b1 3b ca 21 f3 6b |1%.#..*..x.;.!.k|
000000e0 dd fb 8e 53 c6 23 0d 35 c8 0a |...S.#.5..|
000000ea
00000040 3a 76 31 3a 6b 65 79 31 3a 97 d1 2c cd 89 0d 08 |:v1:key1:..,....|
00000050 29 3c 7d 19 41 cb ea d7 3d 50 45 88 82 a3 1f 11 |)<}.A...=PE.....|
00000060 26 cb 43 2e c8 cf 73 7d 34 7e b1 7f 9f 71 d2 51 |&.C...s}4~...q.Q|
00000070 45 05 16 e9 07 d4 62 af f8 2e 6d 4a cf c8 e8 75 |E.....b...mJ...u|
00000080 6b 75 1e b7 64 db 7d 7f fd f3 96 62 e2 a7 ce 22 |ku..d.}....b..."|
00000090 2b 2a 82 01 c3 f5 83 ae 12 8b d5 1d 2e e6 a9 90 |+*..............|
000000a0 bd f0 23 6c 0c 55 e2 52 18 78 fe bf 6d 76 ea 98 |..#l.U.R.x..mv..|
000000b0 fc 2c 17 36 e3 40 87 15 25 13 be d6 04 88 68 5b |.,.6.@..%.....h[|
000000c0 a4 16 81 f6 8e 3b 10 46 cb 2c ba 21 35 0c 5b 49 |.....;.F.,.!5.[I|
000000d0 e5 27 20 4c b3 8e 6b d0 91 c2 28 f1 cc fa 6a 1b |.' L..k...(...j.|
000000e0 31 19 74 e7 a5 66 6a 99 1c 84 c7 e0 b0 fc 32 86 |1.t..fj.......2.|
000000f0 f3 29 5a a4 1c d5 a4 e3 63 26 90 95 1e 27 d0 14 |.)Z.....c&...'..|
00000100 94 f0 ac 1a cd 0d b9 4b ae 32 02 a0 f8 b7 3f 0b |.......K.2....?.|
00000110 6f ad 1f 4d 15 8a d6 68 95 63 cf 7d 04 9a 52 71 |o..M...h.c.}..Rq|
00000120 75 ff 87 6b c5 42 e1 72 27 b5 e9 1a fe e8 c0 3f |u..k.B.r'......?|
00000130 d9 04 5e eb 5d 43 0d 90 ce fa 04 a8 4a b0 aa 01 |..^.]C......J...|
00000140 cf 6d 5b 80 70 5b 99 3c d6 5c c0 dc d1 f5 52 4a |.m[.p[.<.\....RJ|
00000150 2c 2d 28 5a 63 57 8e 4f df 0a |,-(ZcW.O..|
0000015a
```
The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
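As a quick optional check, the same command can be piped through `grep` to confirm the provider prefix is present; an unencrypted secret would show its plaintext value instead:

```
# Reuses the lab's etcdctl invocation; prints only the line containing the aescbc prefix.
gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C | grep aescbc"
```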
@@ -50,20 +62,20 @@ In this section you will verify the ability to create and manage [Deployments](h
Create a deployment for the [nginx](https://nginx.org/en/) web server:
```
kubectl run nginx --image=nginx
kubectl create deployment nginx --image=nginx
```
List the pod created by the `nginx` deployment:
```
kubectl get pods -l run=nginx
kubectl get pods -l app=nginx
```
> output
```
NAME READY STATUS RESTARTS AGE
nginx-4217019353-b5gzn 1/1 Running 0 15s
NAME READY STATUS RESTARTS AGE
nginx-f89759699-kpn5m 1/1 Running 0 10s
```
### Port Forwarding
@@ -73,7 +85,7 @@ In this section you will verify the ability to access applications remotely usin
Retrieve the full name of the `nginx` pod:
```
POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
```
Forward port `8080` on your local machine to port `80` of the `nginx` pod:
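The forwarding command itself is unchanged context omitted by this diff; for reference, it looks like this (assuming the lab's defaults):

```
# Forwards local port 8080 to port 80 of the nginx pod.
kubectl port-forward $POD_NAME 8080:80
```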
@@ -99,13 +111,13 @@ curl --head http://127.0.0.1:8080
```
HTTP/1.1 200 OK
Server: nginx/1.13.7
Date: Mon, 18 Dec 2017 14:50:36 GMT
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:29:25 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "5a1437f4-264"
ETag: "6075b537-264"
Accept-Ranges: bytes
```
@@ -131,7 +143,8 @@ kubectl logs $POD_NAME
> output
```
127.0.0.1 - - [18/Dec/2017:14:50:36 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
...
127.0.0.1 - - [02/May/2021:05:29:25 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"
```
### Exec
@@ -147,7 +160,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
> output
```
nginx version: nginx/1.13.7
nginx version: nginx/1.19.10
```
## Services
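The commands that create the service are unchanged context omitted by this diff; the lab exposes the `nginx` deployment through a NodePort service along these lines (a sketch):

```
# Exposes the nginx deployment on a node port chosen by Kubernetes.
kubectl expose deployment nginx --port 80 --type NodePort
```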
@@ -194,13 +207,13 @@ curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```
HTTP/1.1 200 OK
Server: nginx/1.13.7
Date: Mon, 18 Dec 2017 14:52:09 GMT
Server: nginx/1.19.10
Date: Sun, 02 May 2021 05:31:52 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "5a1437f4-264"
ETag: "6075b537-264"
Accept-Ranges: bytes
```

View File

@@ -1,6 +1,6 @@
# Cleaning Up
In this labs you will delete the compute resources created during this tutorial.
In this lab you will delete the compute resources created during this tutorial.
## Compute Instances
@@ -9,7 +9,8 @@ Delete the controller and worker compute instances:
```
gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2
worker-0 worker-1 worker-2 \
--zone $(gcloud config get-value compute/zone)
```
## Networking
@@ -17,18 +18,16 @@ gcloud -q compute instances delete \
Delete the external load balancer network resources:
```
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
```
{
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
```
gcloud -q compute target-pools delete kubernetes-target-pool
```
gcloud -q compute target-pools delete kubernetes-target-pool
Delete the `kubernetes-the-hard-way` static IP address:
gcloud -q compute http-health-checks delete kubernetes
```
gcloud -q compute addresses delete kubernetes-the-hard-way
gcloud -q compute addresses delete kubernetes-the-hard-way
}
```
Delete the `kubernetes-the-hard-way` firewall rules:
@@ -37,26 +36,28 @@ Delete the `kubernetes-the-hard-way` firewall rules:
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external
```
Delete the Pod network routes:
```
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
```
Delete the `kubernetes` subnet:
```
gcloud -q compute networks subnets delete kubernetes
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-check
```
Delete the `kubernetes-the-hard-way` network VPC:
```
gcloud -q compute networks delete kubernetes-the-hard-way
{
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks delete kubernetes-the-hard-way
}
```
Delete the `kubernetes-the-hard-way` compute address:
```
gcloud -q compute addresses delete kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
```
