25 Commits

Author SHA1 Message Date
Kelsey Hightower
b69bf94b14 Remove cloud provider and move to ARM64 2023-11-10 19:42:42 -08:00
Kelsey Hightower
79a3f79b27 Update to Kubernetes 1.21.0 2021-05-01 23:11:57 -07:00
Kelsey Hightower
ca96371e4d Update to Kubernetes 1.18.6 2020-07-18 00:42:18 -07:00
Kelsey Hightower
5c462220b7 Update to Kubernetes 1.15.3 2019-09-15 12:10:26 -07:00
Kelsey Hightower
bf2850974e Update to Kubernetes 1.12.0 and add CoreDNS support 2018-09-30 19:35:05 +00:00
Kelsey Hightower
b974042d95 Update to Kubernetes 1.10.2 and add gVisor support 2018-05-14 14:06:01 +00:00
Daniel J. Pritchett
4f5cecb5ed Add brief contribution guide 2018-01-30 07:38:21 -08:00
Martin Mosegaard Amdisen
68ebb95898 Fix minor typo in lab 14 2018-01-30 05:45:35 -08:00
Mark Vincze
8db0280ef6 Fix small typo 2018-01-30 05:45:16 -08:00
Andrew Thompson
a5c83f1cee Argument syntax for Filter deprecated.
Operator evaluation is changing for consistency across Google APIs.
2018-01-30 05:44:52 -08:00
githubixx
e2dbdaa66b core workload api incl. Deployment now stable 2018-01-30 05:44:05 -08:00
Andrew Lytvynov
05ca2d04cf Correct kubectl output for kube-dns pod list
https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml will create 1 kube-dns pod without autoscaler.
This may confuse the reader.
2018-01-30 05:42:28 -08:00
Phil Estes
f46642a6ba Update cri-containerd location and release
cri-containerd project is moving as a sub-project of containerd itself;
the GitHub migration is complete. Also, as of 9 Jan, the beta.1 release
of cri-containerd is available so the download link is updated.

Signed-off-by: Phil Estes <estesp@gmail.com>
2018-01-30 05:41:55 -08:00
Kelsey Hightower
63ff9932d9 update kubelet flags 2017-12-18 07:20:15 -08:00
Kelsey Hightower
af9f6d71fc update kubernetes GitHub location 2017-12-18 07:07:54 -08:00
Kelsey Hightower
2c78297922 rename to Google Kubernetes Engine 2017-12-18 06:58:00 -08:00
Kelsey Hightower
07aae4fb45 update to kubernetes 1.9 2017-12-18 06:53:32 -08:00
Kelsey Hightower
e8d728d016 remove remote access to insecure port 2017-10-02 06:48:09 -07:00
Kelsey Hightower
765c1fb5fa remove remote access to insecure port 2017-10-02 06:46:01 -07:00
Kelsey Hightower
ede3437ee8 update to kubernetes 1.8 2017-10-01 20:37:09 -07:00
Steven Trescinski
7f7fd71874 Fixed '--service-cluster-ip-range' subnet for Controller Manager 2017-10-01 12:11:33 -07:00
Kalimar Maia
51e8709080 Instructions for having a default configuration. 2017-10-01 12:11:05 -07:00
Frank Ederveen
92772d2f69 226: use curl for OSX downloads 2017-10-01 12:10:39 -07:00
Leonardo Faoro
b7550ca7ab remove trailing space 2017-09-04 16:08:43 -07:00
Leonardo Faoro
4441278561 remove trailing-spaces and blank lines 2017-09-04 16:08:43 -07:00
37 changed files with 1528 additions and 1600 deletions

.gitignore

@@ -0,0 +1,51 @@
admin-csr.json
admin-key.pem
admin.csr
admin.pem
admin.kubeconfig
ca-config.json
ca-csr.json
ca-key.pem
ca.csr
ca.pem
encryption-config.yaml
kube-controller-manager-csr.json
kube-controller-manager-key.pem
kube-controller-manager.csr
kube-controller-manager.kubeconfig
kube-controller-manager.pem
kube-scheduler-csr.json
kube-scheduler-key.pem
kube-scheduler.csr
kube-scheduler.kubeconfig
kube-scheduler.pem
kube-proxy-csr.json
kube-proxy-key.pem
kube-proxy.csr
kube-proxy.kubeconfig
kube-proxy.pem
kubernetes-csr.json
kubernetes-key.pem
kubernetes.csr
kubernetes.pem
worker-0-csr.json
worker-0-key.pem
worker-0.csr
worker-0.kubeconfig
worker-0.pem
worker-1-csr.json
worker-1-key.pem
worker-1.csr
worker-1.kubeconfig
worker-1.pem
worker-2-csr.json
worker-2-key.pem
worker-2.csr
worker-2.kubeconfig
worker-2.pem
service-account-key.pem
service-account.csr
service-account.pem
service-account-csr.json
*.swp
.idea/

CONTRIBUTING.md

@@ -0,0 +1,18 @@
This project is made possible by contributors like YOU! While all contributions are welcome, please follow these suggestions to help your PR get merged.
## License
This project uses an [Apache license](LICENSE). Be sure you're comfortable with the implications of that before working up a patch.
## Review and merge process
Review and merge duties are managed by [@kelseyhightower](https://github.com/kelseyhightower). Expect some burden of proof for demonstrating the marginal value of adding new content to the tutorial.
Here are some examples of the review and justification process:
- [#208](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/208)
- [#282](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/282)
## Notes on minutiae
If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time.

COPYRIGHT.md

@@ -0,0 +1,3 @@
# Copyright
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>


@@ -1,30 +1,35 @@
# Kubernetes The Hard Way
This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Container Engine](https://cloud.google.com/container-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
This tutorial walks you through setting up Kubernetes the hard way. This guide is not for someone looking for a fully automated tool to bring up a Kubernetes cluster. Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
> The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
## Copyright
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
## Target Audience
The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.
The target audience for this tutorial is someone who wants to understand the fundamentals of Kubernetes and how the core components fit together.
## Cluster Details
Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
Kubernetes The Hard Way guides you through bootstrapping a basic Kubernetes cluster with all control plane components running on a single node, and two worker nodes, which is enough to learn the core concepts.
* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.7.4
* [CRI-O Container Runtime](https://github.com/kubernetes-incubator/cri-o) v1.0.0-beta.0
* [CNI Container Networking](https://github.com/containernetworking/cni) v0.6.0
* [etcd](https://github.com/coreos/etcd) 3.2.6
Component versions:
* [kubernetes](https://github.com/kubernetes/kubernetes) v1.28.x
* [containerd](https://github.com/containerd/containerd) v1.7.x
* [cni](https://github.com/containernetworking/cni) v1.3.x
* [etcd](https://github.com/etcd-io/etcd) v3.4.x
## Labs
This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com). While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms.
This tutorial requires four (4) ARM64 based virtual or physical machines connected to the same network. While ARM64 based machines are used for the tutorial, the lessons learned can be applied to other platforms.
* [Prerequisites](docs/01-prerequisites.md)
* [Installing the Client Tools](docs/02-client-tools.md)
* [Setting up the Jumpbox](docs/02-jumpbox.md)
* [Provisioning Compute Resources](docs/03-compute-resources.md)
* [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md)
* [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
@@ -34,6 +39,5 @@ This tutorial assumes you have access to the [Google Cloud Platform](https://clo
* [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
* [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
* [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)
* [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md)
* [Smoke Test](docs/13-smoke-test.md)
* [Cleaning Up](docs/14-cleanup.md)
* [Smoke Test](docs/12-smoke-test.md)
* [Cleaning Up](docs/13-cleanup.md)

ca.conf

@@ -0,0 +1,206 @@
[req]
distinguished_name = req_distinguished_name
prompt = no
x509_extensions = ca_x509_extensions
[ca_x509_extensions]
basicConstraints = CA:TRUE
keyUsage = cRLSign, keyCertSign
[req_distinguished_name]
C = US
ST = Washington
L = Seattle
CN = CA
[admin]
distinguished_name = admin_distinguished_name
prompt = no
req_extensions = default_req_extensions
[admin_distinguished_name]
CN = admin
O = system:masters
# Service Accounts
#
# The Kubernetes Controller Manager leverages a key pair to generate
# and sign service account tokens as described in the
# [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/)
# documentation.
[service-accounts]
distinguished_name = service-accounts_distinguished_name
prompt = no
req_extensions = default_req_extensions
[service-accounts_distinguished_name]
CN = service-accounts
# Worker Nodes
#
# Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/)
# called Node Authorizer, that specifically authorizes API requests made
# by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet).
# In order to be authorized by the Node Authorizer, Kubelets must use a credential
# that identifies them as being in the `system:nodes` group, with a username
# of `system:node:<nodeName>`.
[node-0]
distinguished_name = node-0_distinguished_name
prompt = no
req_extensions = node-0_req_extensions
[node-0_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Node-0 Certificate"
subjectAltName = DNS:node-0, IP:127.0.0.1
subjectKeyIdentifier = hash
[node-0_distinguished_name]
CN = system:node:node-0
O = system:nodes
C = US
ST = Washington
L = Seattle
[node-1]
distinguished_name = node-1_distinguished_name
prompt = no
req_extensions = node-1_req_extensions
[node-1_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Node-1 Certificate"
subjectAltName = DNS:node-1, IP:127.0.0.1
subjectKeyIdentifier = hash
[node-1_distinguished_name]
CN = system:node:node-1
O = system:nodes
C = US
ST = Washington
L = Seattle
# Kube Proxy Section
[kube-proxy]
distinguished_name = kube-proxy_distinguished_name
prompt = no
req_extensions = kube-proxy_req_extensions
[kube-proxy_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Proxy Certificate"
subjectAltName = DNS:kube-proxy, IP:127.0.0.1
subjectKeyIdentifier = hash
[kube-proxy_distinguished_name]
CN = system:kube-proxy
O = system:node-proxier
C = US
ST = Washington
L = Seattle
# Controller Manager
[kube-controller-manager]
distinguished_name = kube-controller-manager_distinguished_name
prompt = no
req_extensions = kube-controller-manager_req_extensions
[kube-controller-manager_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Controller Manager Certificate"
subjectAltName = DNS:kube-controller-manager, IP:127.0.0.1
subjectKeyIdentifier = hash
[kube-controller-manager_distinguished_name]
CN = system:kube-controller-manager
O = system:kube-controller-manager
C = US
ST = Washington
L = Seattle
# Scheduler
[kube-scheduler]
distinguished_name = kube-scheduler_distinguished_name
prompt = no
req_extensions = kube-scheduler_req_extensions
[kube-scheduler_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Scheduler Certificate"
subjectAltName = DNS:kube-scheduler, IP:127.0.0.1
subjectKeyIdentifier = hash
[kube-scheduler_distinguished_name]
CN = system:kube-scheduler
O = system:kube-scheduler
C = US
ST = Washington
L = Seattle
# API Server
#
# The Kubernetes API server is automatically assigned the `kubernetes`
# internal dns name, which will be linked to the first IP address (`10.32.0.1`)
# from the address range (`10.32.0.0/24`) reserved for internal cluster
# services.
[kube-api-server]
distinguished_name = kube-api-server_distinguished_name
prompt = no
req_extensions = kube-api-server_req_extensions
[kube-api-server_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Scheduler Certificate"
subjectAltName = @kube-api-server_alt_names
subjectKeyIdentifier = hash
[kube-api-server_alt_names]
IP.0 = 127.0.0.1
IP.1 = 10.32.0.1
DNS.0 = kubernetes
DNS.1 = kubernetes.default
DNS.2 = kubernetes.default.svc
DNS.3 = kubernetes.default.svc.cluster
DNS.4 = kubernetes.svc.cluster.local
DNS.5 = server.kubernetes.local
DNS.6 = api-server.kubernetes.local
[kube-api-server_distinguished_name]
CN = kubernetes
C = US
ST = Washington
L = Seattle
[default_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Admin Client Certificate"
subjectKeyIdentifier = hash
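The sections above are consumed later with `openssl req -config ca.conf -section <name>` in the certificate authority lab. As a hedged sketch (assuming `node-0.csr` has already been generated from the `[node-0]` section), you can confirm that the extensions requested in a CSR match the section that produced it:

```bash
# Inspect a CSR generated from the [node-0] section and confirm the
# subject alternative name defined above was carried into the request.
openssl req -in node-0.csr -noout -text | grep -A1 "Subject Alternative Name"
```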

configs/10-bridge.conf

@@ -0,0 +1,15 @@
{
"cniVersion": "1.0.0",
"name": "bridge",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "SUBNET"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
}
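The `SUBNET` placeholder above is replaced with each node's pod subnet from `machines.txt` before the file is installed on a worker. A minimal sketch of that substitution, assuming `10.200.0.0/24` is the subnet assigned to `node-0`:

```bash
# Render the bridge config for node-0 by swapping in its pod subnet;
# the result is what gets copied to /etc/cni/net.d/ on that machine.
SUBNET=10.200.0.0/24
sed "s|SUBNET|${SUBNET}|g" configs/10-bridge.conf > 10-bridge.conf
```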

configs/99-loopback.conf

@@ -0,0 +1,5 @@
{
"cniVersion": "1.1.0",
"name": "lo",
"type": "loopback"
}


@@ -0,0 +1,13 @@
version = 2
[plugins."io.containerd.grpc.v1.cri"]
[plugins."io.containerd.grpc.v1.cri".containerd]
snapshotter = "overlayfs"
default_runtime_name = "runc"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
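Once containerd has been installed with this configuration on a worker, a hedged way to check that the CRI plugin loaded is to query its socket with `crictl`, which is among the downloaded binaries; this is a verification sketch, not a required step:

```bash
# Restart containerd so it picks up the configuration, then ask the CRI
# endpoint for its status; a populated JSON response means the plugin loaded.
systemctl restart containerd
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info
```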


@@ -0,0 +1,33 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
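This manifest grants the API server, which authenticates to kubelets as the `kubernetes` user, access to the kubelet API. A hedged sketch of applying it from the `server` machine once the control plane is up, assuming the manifest is saved as `kube-apiserver-to-kubelet.yaml`:

```bash
# Apply the ClusterRole and ClusterRoleBinding using the admin kubeconfig
# generated later in this tutorial; the file name is illustrative.
kubectl apply -f kube-apiserver-to-kubelet.yaml \
  --kubeconfig admin.kubeconfig
```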


@@ -0,0 +1,6 @@
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
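kube-proxy reads this configuration through its `--config` flag. A hedged sketch of the invocation, assuming the file is installed as `/var/lib/kube-proxy/kube-proxy-config.yaml` (matching the `/var/lib/kube-proxy` path referenced above); in practice this line would appear as the `ExecStart` of a systemd unit:

```bash
# Point kube-proxy at the configuration file; the installed path is an
# assumption that mirrors the kubeconfig path declared in the file.
kube-proxy --config=/var/lib/kube-proxy/kube-proxy-config.yaml
```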


@@ -0,0 +1,6 @@
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
leaderElect: true


@@ -0,0 +1,21 @@
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubelet/ca.crt"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
cgroupDriver: systemd
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"
podCIDR: "SUBNET"
resolvConf: "/etc/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet.crt"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet.key"
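As with the bridge configuration, the `SUBNET` placeholder in `podCIDR` is replaced with the node's pod subnet before this file lands on a worker, and the kubelet loads the result through `--config`. A hedged sketch, where the source and installed file names are assumptions that mirror the `/var/lib/kubelet` paths declared above:

```bash
# Render the kubelet configuration for node-0 and start the kubelet
# against it; the flags mirror what a systemd unit would pass.
sed "s|SUBNET|10.200.0.0/24|g" kubelet-config.yaml \
  > /var/lib/kubelet/kubelet-config.yaml
kubelet \
  --config=/var/lib/kubelet/kubelet-config.yaml \
  --kubeconfig=/var/lib/kubelet/kubeconfig
```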


@@ -1,192 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-dns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
clusterIP: 10.32.0.10
ports:
- name: dns
port: 53
protocol: UDP
targetPort: 53
- name: dns-tcp
port: 53
protocol: TCP
targetPort: 53
selector:
k8s-app: kube-dns
sessionAffinity: None
type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
name: kube-dns
namespace: kube-system
spec:
replicas: 2
selector:
matchLabels:
k8s-app: kube-dns
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
k8s-app: kube-dns
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
env:
- name: PROMETHEUS_PORT
value: "10055"
args:
- --domain=cluster.local.
- --dns-port=10053
- --config-dir=/kube-dns-config
- --v=2
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
ports:
- name: dns-local
containerPort: 10053
protocol: UDP
- name: dns-tcp-local
containerPort: 10053
protocol: TCP
- name: metrics
containerPort: 10055
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
args:
- -v=2
- -logtostderr
- -configDir=/etc/k8s/dns/dnsmasq-nanny
- -restartDnsmasq=true
- --
- -k
- --cache-size=1000
- --log-facility=-
- --server=/cluster.local/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
ports:
- name: dns
containerPort: 53
protocol: UDP
- name: dns-tcp
containerPort: 53
protocol: TCP
resources:
requests:
cpu: 150m
memory: 20Mi
volumeMounts:
- name: kube-dns-config
mountPath: /etc/k8s/dns/dnsmasq-nanny
- name: sidecar
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
livenessProbe:
failureThreshold: 5
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
ports:
- name: metrics
containerPort: 10054
protocol: TCP
resources:
requests:
cpu: 10m
memory: 20Mi
dnsPolicy: Default
restartPolicy: Always
serviceAccount: kube-dns
serviceAccountName: kube-dns
terminationGracePeriodSeconds: 30
tolerations:
- key: CriticalAddonsOnly
operator: Exists
volumes:
- name: kube-dns-config
configMap:
defaultMode: 420
name: kube-dns
optional: true


@@ -1,41 +1,30 @@
# Prerequisites
## Google Cloud Platform
In this lab you will review the machine requirements necessary to follow this tutorial.
This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
## Virtual or Physical Machines
[Estimated cost](https://cloud.google.com/products/calculator/#id=78df6ced-9c50-48f8-a670-bc5003f2ddaa) to run this tutorial: $0.22 per hour ($5.39 per day).
This tutorial requires four (4) virtual or physical ARM64 machines running Debian 12 (bookworm). The following table lists the four machines and their CPU, memory, and storage requirements.
> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
| Name | Description | CPU | RAM | Storage |
|---------|------------------------|-----|-------|---------|
| jumpbox | Administration host | 1 | 512MB | 10GB |
| server | Kubernetes server | 1 | 2GB | 20GB |
| node-0 | Kubernetes worker node | 1 | 2GB | 20GB |
| node-1 | Kubernetes worker node | 1 | 2GB | 20GB |
## Google Cloud Platform SDK
How you provision the machines is up to you; the only requirement is that each machine meets the above system requirements, including the machine specs and OS version. Once you have all four machines provisioned, verify the system requirements by running the `uname` command on each machine:
### Install the Google Cloud SDK
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
Verify the Google Cloud SDK version is 169.0.0 or higher:
```
gcloud version
```bash
uname -mov
```
### Set a Default Compute Region and Zone
After running the `uname` command you should see the following output:
This tutorial assumes a default compute region and zone have been configured.
Set a default compute region:
```
gcloud config set compute/region us-west1
```text
#1 SMP Debian 6.1.55-1 (2023-09-29) aarch64 GNU/Linux
```
Set a default compute zone:
You may be surprised to see `aarch64` here, but that is the official name for the Arm 64-bit instruction set. You will often see `arm64` used by Apple and the maintainers of the Linux kernel when referring to support for `aarch64`. This tutorial will use `arm64` consistently throughout to avoid confusion.
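As a small, optional check (not part of the original steps), Debian's packaging tools report the same architecture under the `arm64` name, so you can see both labels side by side on any of the machines:

```bash
# The kernel reports "aarch64" while Debian's package manager reports
# "arm64"; both names refer to the same 64-bit Arm instruction set.
uname -m                   # aarch64
dpkg --print-architecture  # arm64
```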
```
gcloud config set compute/zone us-west1-c
```
> Use the `gcloud compute zones list` command to view additional regions and zones.
Next: [Installing the Client Tools](02-client-tools.md)
Next: [setting-up-the-jumpbox](02-jumpbox.md)


@@ -1,116 +0,0 @@
# Installing the Client Tools
In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
## Install CFSSL
The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
Download and install `cfssl` and `cfssljson` from the [cfssl repository](https://pkg.cfssl.org):
### OS X
```
wget -q --show-progress --https-only --timestamping \
https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64 \
https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
```
```
chmod +x cfssl_darwin-amd64 cfssljson_darwin-amd64
```
```
sudo mv cfssl_darwin-amd64 /usr/local/bin/cfssl
```
```
sudo mv cfssljson_darwin-amd64 /usr/local/bin/cfssljson
```
### Linux
```
wget -q --show-progress --https-only --timestamping \
https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
```
```
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
```
```
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
```
```
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
```
### Verification
Verify `cfssl` version 1.2.0 or higher is installed:
```
cfssl version
```
> output
```
Version: 1.2.0
Revision: dev
Runtime: go1.6
```
> The cfssljson command line utility does not provide a way to print its version.
## Install kubectl
The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
### OS X
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/darwin/amd64/kubectl
```
```
chmod +x kubectl
```
```
sudo mv kubectl /usr/local/bin/
```
### Linux
```
wget https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl
```
```
chmod +x kubectl
```
```
sudo mv kubectl /usr/local/bin/
```
### Verification
Verify `kubectl` version 1.7.4 or higher is installed:
```
kubectl version --client
```
> output
```
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
```
Next: [Provisioning Compute Resources](03-compute-resources.md)

docs/02-jumpbox.md

@@ -0,0 +1,121 @@
# Set Up The Jumpbox
In this lab you will set up one of the four machines to be a `jumpbox`. This machine will be used to run commands in this tutorial. While a dedicated machine is being used to ensure consistency, these commands can also be run from just about any machine including your personal workstation running macOS or Linux.
Think of the `jumpbox` as the administration machine that you will use as a home base when setting up your Kubernetes cluster from the ground up. One thing we need to do before we get started is install a few command line utilities and clone the Kubernetes The Hard Way git repository, which contains some additional configuration files that will be used to configure various Kubernetes components throughout this tutorial.
Log in to the `jumpbox`:
```bash
ssh root@jumpbox
```
All commands will be run as the `root` user. This is being done for the sake of convenience, and will help reduce the number of commands required to set everything up.
### Install Command Line Utilities
Now that you are logged into the `jumpbox` machine as the `root` user, you will install the command line utilities that will be used to perform various tasks throughout the tutorial.
```bash
apt-get -y install wget curl vim openssl git
```
### Sync GitHub Repository
Now it's time to download a copy of this tutorial, which contains the configuration files and templates that will be used to build your Kubernetes cluster from the ground up. Clone the Kubernetes The Hard Way git repository using the `git` command:
```bash
git clone --depth 1 \
https://github.com/kelseyhightower/kubernetes-the-hard-way.git
```
Change into the `kubernetes-the-hard-way` directory:
```bash
cd kubernetes-the-hard-way
```
This will be the working directory for the rest of the tutorial. If you ever get lost run the `pwd` command to verify you are in the right directory when running commands on the `jumpbox`:
```bash
pwd
```
```text
/root/kubernetes-the-hard-way
```
### Download Binaries
In this section you will download the binaries for the various Kubernetes components. The binaries will be stored in the `downloads` directory on the `jumpbox`, which will reduce the amount of internet bandwidth required to complete this tutorial as we avoid downloading the binaries multiple times for each machine in our Kubernetes cluster.
From the `kubernetes-the-hard-way` directory create a `downloads` directory using the `mkdir` command:
```bash
mkdir downloads
```
The binaries that will be downloaded are listed in the `downloads.txt` file, which you can review using the `cat` command:
```bash
cat downloads.txt
```
Download the binaries listed in the `downloads.txt` file using the `wget` command:
```bash
wget -q --show-progress \
--https-only \
--timestamping \
-P downloads \
-i downloads.txt
```
Depending on your internet connection speed it may take a while to download the `584` megabytes of binaries, and once the download is complete, you can list them using the `ls` command:
```bash
ls -loh downloads
```
```text
total 584M
-rw-r--r-- 1 root 41M May 9 13:35 cni-plugins-linux-arm64-v1.3.0.tgz
-rw-r--r-- 1 root 34M Oct 26 15:21 containerd-1.7.8-linux-arm64.tar.gz
-rw-r--r-- 1 root 22M Aug 14 00:19 crictl-v1.28.0-linux-arm.tar.gz
-rw-r--r-- 1 root 15M Jul 11 02:30 etcd-v3.4.27-linux-arm64.tar.gz
-rw-r--r-- 1 root 111M Oct 18 07:34 kube-apiserver
-rw-r--r-- 1 root 107M Oct 18 07:34 kube-controller-manager
-rw-r--r-- 1 root 51M Oct 18 07:34 kube-proxy
-rw-r--r-- 1 root 52M Oct 18 07:34 kube-scheduler
-rw-r--r-- 1 root 46M Oct 18 07:34 kubectl
-rw-r--r-- 1 root 101M Oct 18 07:34 kubelet
-rw-r--r-- 1 root 9.6M Aug 10 18:57 runc.arm64
```
### Install kubectl
In this section you will install `kubectl`, the official Kubernetes client command line tool, on the `jumpbox` machine. `kubectl` will be used to interact with the Kubernetes control plane once your cluster is provisioned later in this tutorial.
Use the `chmod` command to make the `kubectl` binary executable and move it to the `/usr/local/bin/` directory:
```bash
{
chmod +x downloads/kubectl
cp downloads/kubectl /usr/local/bin/
}
```
At this point `kubectl` is installed and can be verified by running the `kubectl` command:
```bash
kubectl version --client
```
```text
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
```
At this point the `jumpbox` has been set up with all the command line tools and utilities necessary to complete the labs in this tutorial.
Next: [Provisioning Compute Resources](03-compute-resources.md)


@@ -1,172 +1,225 @@
# Provisioning Compute Resources
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the machines required for setting up a Kubernetes cluster.
> Ensure a default compute zone and region have been set as described in the [Prerequisites](01-prerequisites.md#set-a-default-compute-region-and-zone) lab.
## Machine Database
## Networking
This tutorial will leverage a text file, which will serve as a machine database, to store the various machine attributes that will be used when setting up the Kubernetes control plane and worker nodes. The following schema represents entries in the machine database, one entry per line:
The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints.
> Setting up network policies is out of scope for this tutorial.
### Virtual Private Cloud Network
In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster.
Create the `kubernetes-the-hard-way` custom VPC network:
```
gcloud compute networks create kubernetes-the-hard-way --mode custom
```text
IPV4_ADDRESS FQDN HOSTNAME POD_SUBNET
```
A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
Each of the columns corresponds to a machine IP address `IPV4_ADDRESS`, fully qualified domain name `FQDN`, host name `HOSTNAME`, and the IP subnet `POD_SUBNET`. Kubernetes assigns one IP address per `pod` and the `POD_SUBNET` represents the unique IP address range assigned to each machine in the cluster for doing so.
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
Here is an example machine database similar to the one used when creating this tutorial. Notice the IP addresses have been masked out. Your machines can be assigned any IP address as long as each machine is reachable from the others and from the `jumpbox`.
```
gcloud compute networks subnets create kubernetes \
--network kubernetes-the-hard-way \
--range 10.240.0.0/24
```bash
cat machines.txt
```
> The `10.240.0.0/24` IP address range can host up to 254 compute instances.
### Firewall Rules
Create a firewall rule that allows internal communication across all protocols:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16
```text
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0 10.200.0.0/24
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1 10.200.1.0/24
```
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
Now it's your turn to create a `machines.txt` file with the details for the three machines you will be using to create your Kubernetes cluster. Use the example machine database from above and add the details for your machines.
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0
## Configuring SSH Access
SSH will be used to configure the machines in the cluster. Verify that you have `root` SSH access to each machine listed in your machine database. You may need to enable root SSH access on each node by updating the sshd_config file and restarting the SSH server.
### Enable root SSH Access
If `root` SSH access is enabled for each of your machines you can skip this section.
By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons as the `root` user is a well known user on Linux systems, and if a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mentioned earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. On each machine, log in via SSH using your user account, then switch to the `root` user using the `su` command:
```bash
su - root
```
Create a firewall rule that allows health check probes from the GCP [network load balancer IP ranges](https://cloud.google.com/compute/docs/load-balancing/network/#firewall_rules_and_network_load_balancing):
Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the `PermitRootLogin` option to `yes`:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-checks \
--allow tcp:8080 \
--network kubernetes-the-hard-way \
--source-ranges 209.85.204.0/22,209.85.152.0/22,35.191.0.0/16
```bash
sed -i \
's/^#PermitRootLogin.*/PermitRootLogin yes/' \
/etc/ssh/sshd_config
```
> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
Restart the `sshd` SSH server to pick up the updated configuration file:
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute firewall-rules list --filter "network kubernetes-the-hard-way"
```bash
systemctl restart sshd
```
> output
### Generate and Distribute SSH Keys
```
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp
kubernetes-the-hard-way-allow-health-checks kubernetes-the-hard-way INGRESS 1000 tcp:8080
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp
In this section you will generate and distribute an SSH keypair to the `server`, `node-0`, and `node-1` machines, which will be used to run commands on those machines throughout this tutorial. Run the following commands from the `jumpbox` machine.
Generate a new SSH key:
```bash
ssh-keygen
```
### Kubernetes Public IP Address
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
```
gcloud compute addresses create kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
```text
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
```
Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
Copy the SSH public key to each machine:
```
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
```bash
while read IP FQDN HOST SUBNET; do
ssh-copy-id root@${IP}
done < machines.txt
```
> output
Once each key is added, verify SSH public key access is working:
```
NAME REGION ADDRESS STATUS
kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED
```bash
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} uname -o -m
done < machines.txt
```
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [CRI-O container runtime](https://github.com/kubernetes-incubator/cri-o). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
### Kubernetes Controllers
Create three compute instances which will host the Kubernetes control plane:
```text
aarch64 GNU/Linux
aarch64 GNU/Linux
aarch64 GNU/Linux
```
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,controller
## Hostnames
In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. The hostnames also play a major role within the cluster. Instead of using an IP address to issue commands to the Kubernetes API server, Kubernetes clients will use the `server` hostname. Hostnames are also used by each worker machine, `node-0` and `node-1`, when registering with a given Kubernetes cluster.
To configure the hostname for each machine, run the following commands on the `jumpbox`.
Set the hostname on each machine listed in the `machines.txt` file:
```bash
while read IP FQDN HOST SUBNET; do
CMD="sed -i 's/^127.0.1.1.*/127.0.1.1\t${FQDN} ${HOST}/' /etc/hosts"
ssh -n root@${IP} "$CMD"
ssh -n root@${IP} hostnamectl hostname ${HOST}
done < machines.txt
```
Verify the hostname is set on each machine:
```bash
while read IP FQDN HOST SUBNET; do
ssh -n root@${IP} hostname --fqdn
done < machines.txt
```
```text
server.kubernetes.local
node-0.kubernetes.local
node-1.kubernetes.local
```
## DNS
In this section you will generate a DNS `hosts` file which will be appended to the `jumpbox`'s local `/etc/hosts` file and to the `/etc/hosts` file of all three machines used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.
Create a new `hosts` file and add a header to identify the machines being added:
```bash
echo "" > hosts
echo "# Kubernetes The Hard Way" >> hosts
```
Generate a DNS entry for each machine in the `machines.txt` file and append it to the `hosts` file:
```bash
while read IP FQDN HOST SUBNET; do
ENTRY="${IP} ${FQDN} ${HOST}"
echo $ENTRY >> hosts
done < machines.txt
```
Review the DNS entries in the `hosts` file:
```bash
cat hosts
```
```text
# Kubernetes The Hard Way
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
```
## Adding DNS Entries To A Local Machine
In this section you will append the DNS entries from the `hosts` file to the local `/etc/hosts` file on your `jumpbox` machine.
Append the DNS entries from `hosts` to `/etc/hosts`:
```bash
cat hosts >> /etc/hosts
```
Verify that the `/etc/hosts` file has been updated:
```bash
cat /etc/hosts
```
```text
127.0.0.1 localhost
127.0.1.1 jumpbox
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# Kubernetes The Hard Way
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
```
At this point you should be able to SSH to each machine listed in the `machines.txt` file using a hostname.
```bash
for host in server node-0 node-1
do ssh root@${host} uname -o -m -n
done
```
### Kubernetes Workers
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
> The Kubernetes cluster CIDR range is defined by the Controller Manager's `--cluster-cidr` flag. In this tutorial the cluster CIDR range will be set to `10.200.0.0/16`, which supports 254 subnets.
Create three compute instances which will host the Kubernetes worker nodes:
```
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-1604-lts \
--image-project ubuntu-os-cloud \
--machine-type n1-standard-1 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,worker
done
```text
server aarch64 GNU/Linux
node-0 aarch64 GNU/Linux
node-1 aarch64 GNU/Linux
```
### Verification
## Adding DNS Entries To The Remote Machines
List the compute instances in your default compute zone:
In this section you will append the DNS entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.
```
gcloud compute instances list
Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:
```bash
while read IP FQDN HOST SUBNET; do
scp hosts root@${HOST}:~/
ssh -n \
root@${HOST} "cat hosts >> /etc/hosts"
done < machines.txt
```
> output
```
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
controller-0 us-west1-c n1-standard-1 10.240.0.10 XX.XXX.XXX.XXX RUNNING
controller-1 us-west1-c n1-standard-1 10.240.0.11 XX.XXX.X.XX RUNNING
controller-2 us-west1-c n1-standard-1 10.240.0.12 XX.XXX.XXX.XX RUNNING
worker-0 us-west1-c n1-standard-1 10.240.0.20 XXX.XXX.XXX.XX RUNNING
worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX RUNNING
worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING
```
At this point hostnames can be used when connecting to machines from your `jumpbox` machine, or any of the three machines in the Kubernetes cluster. Instead of using IP addresses you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)


@@ -1,283 +1,108 @@
# Provisioning a CA and Generating TLS Certificates
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kubelet, and kube-proxy.
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using openssl to bootstrap a Certificate Authority, and generate TLS certificates for the following components: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy. The commands in this section should be run from the `jumpbox`.
## Certificate Authority
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates for the other Kubernetes components. Setting up a CA and generating certificates using `openssl` can be time-consuming, especially when doing it for the first time. To streamline this lab, I've included an openssl configuration file `ca.conf`, which defines all the details needed to generate certificates for each Kubernetes component.
Create the CA configuration file:
Take a moment to review the `ca.conf` configuration file:
```bash
cat ca.conf
```
cat > ca-config.json <<EOF
You don't need to understand everything in the `ca.conf` file to complete this tutorial, but you should consider it a starting point for learning `openssl` and the configuration that goes into managing certificates at a high level.
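If you'd like a quick map of the file before diving in, listing its section headers is a convenient (optional) starting point:

```bash
# Each section header in ca.conf corresponds to a component or node that
# receives its own certificate in this lab.
grep '^\[' ca.conf
```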
Every certificate authority starts with a private key and root certificate. In this section we are going to create a self-signed certificate authority, and while that's all we need for this tutorial, this shouldn't be considered something you would do in a real-world production level environment.
Generate the CA configuration file, certificate, and private key:
```bash
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
openssl genrsa -out ca.key 4096
openssl req -x509 -new -sha512 -noenc \
-key ca.key -days 3653 \
-config ca.conf \
-out ca.crt
}
EOF
```
Create the CA certificate signing request:
```
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
```
Generate the CA certificate and private key:
```
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```
Results:
```
ca-key.pem
ca.pem
```txt
ca.crt ca.key
```
## Client and Server Certificates
## Create Client and Server Certificates
In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.
### The Admin Client Certificate
Generate the certificates and private keys:
Create the `admin` client certificate signing request:
```
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
```bash
certs=(
"admin" "node-0" "node-1"
"kube-proxy" "kube-scheduler"
"kube-controller-manager"
"kube-api-server"
"service-accounts"
)
```
Generate the `admin` client certificate and private key:
```bash
for i in ${certs[*]}; do
openssl genrsa -out "${i}.key" 4096
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
```
Results:
```
admin-key.pem
admin.pem
```
### The Kubelet Client Certificates
Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/) called Node Authorizer, that specifically authorizes API requests made by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet). In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as being in the `system:nodes` group, with a username of `system:node:<nodeName>`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
Generate a certificate and private key for each Kubernetes worker node:
```
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
INTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
openssl req -new -key "${i}.key" -sha256 \
-config "ca.conf" -section ${i} \
-out "${i}.csr"
openssl x509 -req -days 3653 -in "${i}.csr" \
-copy_extensions copyall \
-sha256 -CA "ca.crt" \
-CAkey "ca.key" \
-CAcreateserial \
-out "${i}.crt"
done
```
Results:
Running the above command generates a private key, certificate request, and signed SSL certificate for each of the Kubernetes components. You can list the generated files with the following command:
```
worker-0-key.pem
worker-0.pem
worker-1-key.pem
worker-1.pem
worker-2-key.pem
worker-2.pem
```
### The kube-proxy Client Certificate
Create the `kube-proxy` client certificate signing request:
```
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
```
Generate the `kube-proxy` client certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
```
Results:
```
kube-proxy-key.pem
kube-proxy.pem
```
### The Kubernetes API Server Certificate
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
Create the Kubernetes API Server certificate signing request:
```
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
```
Generate the Kubernetes API Server certificate and private key:
```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,kubernetes.default \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
```
Results:
```
kubernetes-key.pem
kubernetes.pem
```bash
ls -1 *.crt *.key *.csr
```
## Distribute the Client and Server Certificates
Copy the appropriate certificates and private keys to each worker instance:
In this section you will copy the various certificates to each machine, under the directory that each Kubernetes component will search for its certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets as they are often used as credentials by the Kubernetes components to authenticate to each other.
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
Copy the appropriate certificates and private keys to the `node-0` and `node-1` machines:
```bash
for host in node-0 node-1; do
ssh root@$host mkdir /var/lib/kubelet/
scp ca.crt root@$host:/var/lib/kubelet/
scp $host.crt \
root@$host:/var/lib/kubelet/kubelet.crt
scp $host.key \
root@$host:/var/lib/kubelet/kubelet.key
done
```
Copy the appropriate certificates and private keys to each controller instance:
Copy the appropriate certificates and private keys to the `server` machine:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${instance}:~/
done
```bash
scp \
ca.key ca.crt \
kube-api-server.key kube-api-server.crt \
service-accounts.key service-accounts.crt \
root@server:~/
```
> The `kube-proxy` and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)


@@ -4,98 +4,207 @@ In this lab you will generate [Kubernetes configuration files](https://kubernete
## Client Authentication Configs
In this section you will generate kubeconfig files for the `kubelet` and `kube-proxy` clients.
> The `scheduler` and `controller manager` access the Kubernetes API Server locally over an insecure API port which does not require authentication. The Kubernetes API Server's insecure port is only enabled for local access.
### Kubernetes Public IP Address
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
In this section you will generate kubeconfig files for the `kubelet` and the `admin` user.
### The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
Generate a kubeconfig file for each worker node:
> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
```
for instance in worker-0 worker-1 worker-2; do
Generate a kubeconfig file for the `node-0` and `node-1` worker nodes:
```bash
for host in node-0 node-1; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${instance}.kubeconfig
--server=https://server.kubernetes.local:6443 \
--kubeconfig=${host}.kubeconfig
kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
kubectl config set-credentials system:node:${host} \
--client-certificate=${host}.crt \
--client-key=${host}.key \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig
--kubeconfig=${host}.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig
--user=system:node:${host} \
--kubeconfig=${host}.kubeconfig
kubectl config use-context default --kubeconfig=${instance}.kubeconfig
kubectl config use-context default \
--kubeconfig=${host}.kubeconfig
done
```
Results:
```
worker-0.kubeconfig
worker-1.kubeconfig
worker-2.kubeconfig
```text
node-0.kubeconfig
node-1.kubeconfig
```
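To spot-check one of the generated files, `kubectl config view` prints the cluster, user, and context entries while redacting the embedded certificate data:

```bash
# Inspect the node-0 kubeconfig; embedded certificates are shown as redacted
kubectl config view --kubeconfig node-0.kubeconfig
```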
### The kube-proxy Kubernetes Configuration File
Generate a kubeconfig file for the `kube-proxy` service:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.kubernetes.local:6443 \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.crt \
--client-key=kube-proxy.key \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default \
--kubeconfig=kube-proxy.kubeconfig
}
```
```
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
Results:
```text
kube-proxy.kubeconfig
```
```
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
### The kube-controller-manager Kubernetes Configuration File
Generate a kubeconfig file for the `kube-controller-manager` service:
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.kubernetes.local:6443 \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.crt \
--client-key=kube-controller-manager.key \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default \
--kubeconfig=kube-controller-manager.kubeconfig
}
```
Results:
```text
kube-controller-manager.kubeconfig
```
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
### The kube-scheduler Kubernetes Configuration File
Generate a kubeconfig file for the `kube-scheduler` service:
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.kubernetes.local:6443 \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.crt \
--client-key=kube-scheduler.key \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default \
--kubeconfig=kube-scheduler.kubeconfig
}
```
Results:
```text
kube-scheduler.kubeconfig
```
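At this point you should have one kubeconfig per worker node plus one each for `kube-proxy`, `kube-controller-manager`, and `kube-scheduler`. A simple listing confirms nothing was skipped:

```bash
# Expect node-0, node-1, kube-proxy, kube-controller-manager, and kube-scheduler kubeconfigs
ls -1 *.kubeconfig
```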
### The admin Kubernetes Configuration File
Generate a kubeconfig file for the `admin` user:
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default \
--kubeconfig=admin.kubeconfig
}
```
Results:
```text
admin.kubeconfig
```
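A quick way to verify the `admin` kubeconfig is wired up correctly is to list its contexts; the `default` context should reference the `kubernetes-the-hard-way` cluster and the `admin` user:

```bash
kubectl config get-contexts --kubeconfig admin.kubeconfig
```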
## Distribute the Kubernetes Configuration Files
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
Copy the `kubelet` and `kube-proxy` kubeconfig files to the node-0 instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
```bash
for host in node-0 node-1; do
ssh root@$host "mkdir /var/lib/{kube-proxy,kubelet}"
scp kube-proxy.kubeconfig \
    root@$host:/var/lib/kube-proxy/kubeconfig
scp ${host}.kubeconfig \
root@$host:/var/lib/kubelet/kubeconfig
done
```
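If you want to double-check the distribution, list the target directories on one of the nodes; each should contain a file named `kubeconfig`:

```bash
ssh root@node-0 'ls -l /var/lib/kubelet /var/lib/kube-proxy'
```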
Copy the `kube-controller-manager` and `kube-scheduler` kubeconfig files to the controller instance:
```bash
scp admin.kubeconfig \
kube-controller-manager.kubeconfig \
kube-scheduler.kubeconfig \
root@server:~/
```
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)


@@ -8,36 +8,23 @@ In this lab you will generate an encryption key and an [encryption config](https
Generate an encryption key:
```
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```bash
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```
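The `aescbc` provider expects a 32-byte key, so it is worth confirming the generated value decodes to exactly 32 bytes before rendering the config:

```bash
# Should print 32
echo -n "$ENCRYPTION_KEY" | base64 -d | wc -c
```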
## The Encryption Config File
Create the `encryption-config.yaml` encryption config file:
```
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
```bash
envsubst < configs/encryption-config.yaml \
> encryption-config.yaml
```
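`envsubst` only substitutes variables that are present in the environment, so a quick grep confirms the exported key actually made it into the rendered file rather than being left empty:

```bash
# The secret field should contain the base64 encoded key, not an empty string
grep 'secret:' encryption-config.yaml
```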
Copy the `encryption-config.yaml` encryption config file to each controller instance:
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done
```bash
scp encryption-config.yaml root@server:~/
```
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)


@@ -1,128 +1,76 @@
# Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/coreos/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:
Copy `etcd` binaries and systemd unit files to the `server` instance:
```
gcloud compute ssh controller-0
```bash
scp \
downloads/etcd-v3.4.27-linux-arm64.tar.gz \
units/etcd.service \
root@server:~/
```
## Bootstrapping an etcd Cluster Member
### Download and Install the etcd Binaries
Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project:
The commands in this lab must be run on the `server` machine. Login to the `server` machine using the `ssh` command. Example:
```bash
ssh root@server
```
wget -q --show-progress --https-only --timestamping \
"https://github.com/coreos/etcd/releases/download/v3.2.6/etcd-v3.2.6-linux-amd64.tar.gz"
```
## Bootstrapping an etcd Cluster
### Install the etcd Binaries
Extract and install the `etcd` server and the `etcdctl` command line utility:
```
tar -xvf etcd-v3.2.6-linux-amd64.tar.gz
```
```
sudo mv etcd-v3.2.6-linux-amd64/etcd* /usr/local/bin/
```bash
{
tar -xvf etcd-v3.4.27-linux-arm64.tar.gz
mv etcd-v3.4.27-linux-arm64/etcd* /usr/local/bin/
}
```
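Before configuring the data directory you can confirm the binaries are on the `PATH` and were built for the right architecture:

```bash
etcd --version
etcdctl version
```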
### Configure the etcd Server
```
sudo mkdir -p /etc/etcd /var/lib/etcd
```
```
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
```
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```bash
{
mkdir -p /etc/etcd /var/lib/etcd
chmod 700 /var/lib/etcd
cp ca.crt kube-api-server.key kube-api-server.crt \
/etc/etcd/
}
```
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
```
ETCD_NAME=$(hostname -s)
```
Create the `etcd.service` systemd unit file:
```
cat > etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/coreos
[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,http://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```bash
mv etcd.service /etc/systemd/system/
```
### Start the etcd Server
```bash
{
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
}
```
sudo mv etcd.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable etcd
```
```
sudo systemctl start etcd
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## Verification
List the etcd cluster members:
```
ETCDCTL_API=3 etcdctl member list
```bash
etcdctl member list
```
> output
```
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
```text
6702b0a34e2cfd39, started, controller, http://127.0.0.1:2380, http://127.0.0.1:2379, false
```
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)


@@ -1,264 +1,179 @@
# Bootstrapping the Kubernetes Control Plane
In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the controller machine: Kubernetes API Server, Scheduler, and Controller Manager.
## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:
Copy Kubernetes binaries and systemd unit files to the `server` instance:
```bash
scp \
downloads/kube-apiserver \
downloads/kube-controller-manager \
downloads/kube-scheduler \
downloads/kubectl \
units/kube-apiserver.service \
units/kube-controller-manager.service \
units/kube-scheduler.service \
configs/kube-scheduler.yaml \
configs/kube-apiserver-to-kubelet.yaml \
root@server:~/
```
gcloud compute ssh controller-0
The commands in this lab must be run on the controller instance: `server`. Login to the controller instance using the `ssh` command. Example:
```bash
ssh root@server
```
## Provision the Kubernetes Control Plane
### Download and Install the Kubernetes Controller Binaries
Create the Kubernetes configuration directory:
Download the official Kubernetes release binaries:
```bash
mkdir -p /etc/kubernetes/config
```
```
wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl"
```
### Install the Kubernetes Controller Binaries
Install the Kubernetes binaries:
```
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
```
```
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
```bash
{
chmod +x kube-apiserver \
kube-controller-manager \
kube-scheduler kubectl
mv kube-apiserver \
kube-controller-manager \
kube-scheduler kubectl \
/usr/local/bin/
}
```
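As with etcd, a version check is a cheap way to confirm the binaries were installed correctly before wiring up the systemd units:

```bash
kube-apiserver --version
kubectl version --client
```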
### Configure the Kubernetes API Server
```
sudo mkdir -p /var/lib/kubernetes/
```
```bash
{
mkdir -p /var/lib/kubernetes/
```
sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/
```
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
mv ca.crt ca.key \
kube-api-server.key kube-api-server.crt \
service-accounts.key service-accounts.crt \
encryption-config.yaml \
/var/lib/kubernetes/
}
```
Create the `kube-apiserver.service` systemd unit file:
```
cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--admission-control=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.pem \\
--enable-swagger-ui=true \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
--event-ttl=1h \\
--experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--insecure-bind-address=0.0.0.0 \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
--kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
--kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
--kubelet-https=true \\
--runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
--service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-ca-file=/var/lib/kubernetes/ca.pem \\
--tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
--tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```bash
mv kube-apiserver.service \
/etc/systemd/system/kube-apiserver.service
```
### Configure the Kubernetes Controller Manager
Move the `kube-controller-manager` kubeconfig into place:
```bash
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```
Create the `kube-controller-manager.service` systemd unit file:
```
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--leader-elect=true \\
--master=http://${INTERNAL_IP}:8080 \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
--service-cluster-ip-range=10.32.0.0/16 \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```bash
mv kube-controller-manager.service /etc/systemd/system/
```
### Configure the Kubernetes Scheduler
Move the `kube-scheduler` kubeconfig into place:
```bash
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
Create the `kube-scheduler.yaml` configuration file:
```bash
mv kube-scheduler.yaml /etc/kubernetes/config/
```
Create the `kube-scheduler.service` systemd unit file:
```
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--leader-elect=true \\
--master=http://${INTERNAL_IP}:8080 \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```bash
mv kube-scheduler.service /etc/systemd/system/
```
### Start the Controller Services
```
sudo mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
```
```
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
```bash
{
systemctl daemon-reload
systemctl enable kube-apiserver \
kube-controller-manager kube-scheduler
systemctl start kube-apiserver \
kube-controller-manager kube-scheduler
}
```
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
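If any service fails to come up, `systemctl` and the journal are the first places to look. A minimal troubleshooting sketch:

```bash
# All three services should report "active"
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler

# Inspect recent API server logs if something reports "failed"
journalctl -u kube-apiserver --no-pager | tail -n 20
```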
### Verification
```
kubectl get componentstatuses
```bash
kubectl cluster-info \
--kubeconfig admin.kubeconfig
```
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```text
Kubernetes control plane is running at https://127.0.0.1:6443
```
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## RBAC for Kubelet Authorization
## The Kubernetes Frontend Load Balancer
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
> The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
The commands in this section will affect the entire cluster and only need to be run on the controller node.
Create the external load balancer network resources:
```
gcloud compute http-health-checks create kube-apiserver-health-check \
--description "Kubernetes API Server Health Check" \
--port 8080 \
--request-path /healthz
```bash
ssh root@server
```
```
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check=kube-apiserver-health-check
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
```
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(name)')
```
```
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
```bash
kubectl apply -f kube-apiserver-to-kubelet.yaml \
--kubeconfig admin.kubeconfig
```
### Verification
Retrieve the `kubernetes-the-hard-way` static IP address:
```
KUBERNETES_PUBLIC_IP_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
At this point the Kubernetes control plane is up and running. Run the following commands from the `jumpbox` machine to verify it's working:
Make an HTTP request for the Kubernetes version info:
```
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_IP_ADDRESS}:6443/version
```bash
curl -k --cacert ca.crt https://server.kubernetes.local:6443/version
```
> output
```
```text
{
"major": "1",
"minor": "7",
"gitVersion": "v1.7.4",
"gitCommit": "793658f2d7ca7f064d2bdf606519f9fe1229c381",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"gitTreeState": "clean",
"buildDate": "2017-08-17T08:30:51Z",
"goVersion": "go1.8.3",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"compiler": "gc",
"platform": "linux/amd64"
"platform": "linux/arm64"
}
```


@@ -1,56 +1,89 @@
# Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [cri-o](https://github.com/kubernetes-incubator/cri-o), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
## Prerequisites
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example:
Copy Kubernetes binaries and systemd unit files to each worker instance:
```bash
for host in node-0 node-1; do
SUBNET=$(grep $host machines.txt | cut -d " " -f 4)
sed "s|SUBNET|$SUBNET|g" \
configs/10-bridge.conf > 10-bridge.conf
sed "s|SUBNET|$SUBNET|g" \
configs/kubelet-config.yaml > kubelet-config.yaml
scp 10-bridge.conf kubelet-config.yaml \
root@$host:~/
done
```
gcloud compute ssh worker-0
```bash
for host in node-0 node-1; do
scp \
downloads/runc.arm64 \
downloads/crictl-v1.28.0-linux-arm.tar.gz \
downloads/cni-plugins-linux-arm64-v1.3.0.tgz \
downloads/containerd-1.7.8-linux-arm64.tar.gz \
downloads/kubectl \
downloads/kubelet \
downloads/kube-proxy \
configs/99-loopback.conf \
configs/containerd-config.toml \
configs/kubelet-config.yaml \
configs/kube-proxy-config.yaml \
units/containerd.service \
units/kubelet.service \
units/kube-proxy.service \
root@$host:~/
done
```
The commands in this lab must be run on each worker instance: `node-0` and `node-1`. Login to each worker instance using the `ssh` command. Example:
```bash
ssh root@node-0
```
## Provisioning a Kubernetes Worker Node
### Install the cri-o OS Dependencies
Install the OS dependencies:
Add the `alexlarsson/flatpak` [PPA](https://launchpad.net/ubuntu/+ppas) which hosts the `libostree` package:
```
sudo add-apt-repository -y ppa:alexlarsson/flatpak
```bash
{
apt-get update
apt-get -y install socat conntrack ipset
}
```
```
sudo apt-get update
> The socat binary enables support for the `kubectl port-forward` command.
### Disable Swap
By default, the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
Verify if swap is enabled:
```bash
swapon --show
```
Install the OS dependencies required by the cri-o container runtime:
If the output is empty, swap is not enabled. If swap is enabled, run the following command to disable it immediately:
```
sudo apt-get install -y socat libgpgme11 libostree-1-1
```bash
swapoff -a
```
### Download and Install Worker Binaries
```
wget -q --show-progress --https-only --timestamping \
https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
https://github.com/opencontainers/runc/releases/download/v1.0.0-rc4/runc.amd64 \
https://storage.googleapis.com/kubernetes-the-hard-way/crio-amd64-v1.0.0-beta.0.tar.gz \
https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
```
> To ensure swap remains off after reboot, consult your Linux distro documentation.
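On distributions that list swap in `/etc/fstab`, one common approach (shown here only as a sketch; adapt it to your system) is to comment out the swap entry so it is not re-enabled at boot:

```bash
# Comment out any swap entries in /etc/fstab; a .bak backup is written first
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```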
Create the installation directories:
```
sudo mkdir -p \
/etc/containers \
```bash
mkdir -p \
/etc/cni/net.d \
/etc/crio \
/opt/cni/bin \
/usr/local/libexec/crio \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
@@ -59,226 +92,85 @@ sudo mkdir -p \
Install the worker binaries:
```bash
{
mkdir -p containerd
tar -xvf crictl-v1.28.0-linux-arm.tar.gz
tar -xvf containerd-1.7.8-linux-arm64.tar.gz -C containerd
tar -xvf cni-plugins-linux-arm64-v1.3.0.tgz -C /opt/cni/bin/
mv runc.arm64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
mv containerd/bin/* /bin/
}
```
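A quick round of version checks confirms the worker binaries unpacked correctly for the target architecture:

```bash
runc --version
crictl --version
containerd --version
kubelet --version
```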
sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
```
```
tar -xvf crio-amd64-v1.0.0-beta.0.tar.gz
```
```
chmod +x kubectl kube-proxy kubelet runc.amd64
```
```
sudo mv runc.amd64 /usr/local/bin/runc
```
```
sudo mv crio crioctl kpod kubectl kube-proxy kubelet /usr/local/bin/
```
```
sudo mv conmon pause /usr/local/libexec/crio/
```
### Configure CNI Networking
Retrieve the Pod CIDR range for the current compute instance:
```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```
Create the `bridge` network configuration file:
```bash
mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```
cat > 10-bridge.conf <<EOF
### Configure containerd
Install the `containerd` configuration files:
```bash
{
"cniVersion": "0.3.1",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
[{"subnet": "${POD_CIDR}"}]
],
"routes": [{"dst": "0.0.0.0/0"}]
}
mkdir -p /etc/containerd/
mv containerd-config.toml /etc/containerd/config.toml
mv containerd.service /etc/systemd/system/
}
EOF
```
Create the `loopback` network configuration file:
```
cat > 99-loopback.conf <<EOF
{
"cniVersion": "0.3.1",
"type": "loopback"
}
EOF
```
Move the network configuration files to the CNI configuration directory:
```
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```
### Configure the CRI-O Container Runtime
```
sudo mv crio.conf seccomp.json /etc/crio/
```
```
sudo mv policy.json /etc/containers/
```
```
cat > crio.service <<EOF
[Unit]
Description=CRI-O daemon
Documentation=https://github.com/kubernetes-incubator/cri-o
[Service]
ExecStart=/usr/local/bin/crio
Restart=always
RestartSec=10s
[Install]
WantedBy=multi-user.target
EOF
```
### Configure the Kubelet
```
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
```
Create the `kubelet-config.yaml` configuration file:
```
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
```
```
sudo mv ca.pem /var/lib/kubernetes/
```
Create the `kubelet.service` systemd unit file:
```
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=crio.service
Requires=crio.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--allow-privileged=true \\
--cluster-dns=10.32.0.10 \\
--cluster-domain=cluster.local \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/crio.sock \\
--enable-custom-metrics \\
--image-pull-progress-deadline=2m \\
--image-service-endpoint=unix:///var/run/crio.sock \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--pod-cidr=${POD_CIDR} \\
--register-node=true \\
--require-kubeconfig \\
--runtime-request-timeout=10m \\
--tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
--tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```bash
{
mv kubelet-config.yaml /var/lib/kubelet/
mv kubelet.service /etc/systemd/system/
}
```
### Configure the Kubernetes Proxy
```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy.service` systemd unit file:
```
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--cluster-cidr=10.200.0.0/16 \\
--kubeconfig=/var/lib/kube-proxy/kubeconfig \\
--proxy-mode=iptables \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```bash
{
mv kube-proxy-config.yaml /var/lib/kube-proxy/
mv kube-proxy.service /etc/systemd/system/
}
```
### Start the Worker Services
```bash
{
systemctl daemon-reload
systemctl enable containerd kubelet kube-proxy
systemctl start containerd kubelet kube-proxy
}
```
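As on the controller, `systemctl` can confirm the services came up. If the containerd configuration uses the default socket path (an assumption here; check `containerd-config.toml` if it differs), `crictl` should also be able to reach it:

```bash
systemctl is-active containerd kubelet kube-proxy

# Verify the CRI endpoint is reachable (socket path is the containerd default)
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info
```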
sudo mv crio.service kubelet.service kube-proxy.service /etc/systemd/system/
```
```
sudo systemctl daemon-reload
```
```
sudo systemctl enable crio kubelet kube-proxy
```
```
sudo systemctl start crio kubelet kube-proxy
```
> Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
## Verification
Login to one of the controller nodes:
```
gcloud compute ssh controller-0
```
The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the `jumpbox` machine.
List the registered Kubernetes nodes:
```
kubectl get nodes
```bash
ssh root@server \
"kubectl get nodes \
--kubeconfig admin.kubeconfig"
```
> output
```
NAME STATUS AGE VERSION
worker-0 Ready 5m v1.7.4
worker-1 Ready 3m v1.7.4
worker-2 Ready 7s v1.7.4
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 1m v1.28.3
node-1 Ready <none> 10s v1.28.3
```
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)


@@ -2,77 +2,80 @@
In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.
> Run the commands in this lab from the same directory used to generate the admin client certificates.
> Run the commands in this lab from the `jumpbox` machine.
## The Admin Kubernetes Configuration File
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Each kubeconfig requires a Kubernetes API Server to connect to.
Retrieve the `kubernetes-the-hard-way` static IP address:
You should be able to ping `server.kubernetes.local` based on the `/etc/hosts` DNS entry from a previous lab.
```bash
curl -k --cacert ca.crt \
https://server.kubernetes.local:6443/version
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```text
{
"major": "1",
"minor": "28",
"gitVersion": "v1.28.3",
"gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
"gitTreeState": "clean",
"buildDate": "2023-10-18T11:33:18Z",
"goVersion": "go1.20.10",
"compiler": "gc",
"platform": "linux/arm64"
}
```
Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
```
```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://server.kubernetes.local:6443
```
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
```
kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key
```
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
```
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
}
```
kubectl config use-context kubernetes-the-hard-way
```
Running the commands above creates a kubeconfig file in the default location, `~/.kube/config`, which is used by the `kubectl` command line tool. This means you can run `kubectl` without specifying a kubeconfig file explicitly.
## Verification
Check the health of the remote Kubernetes cluster:
Check the version of the remote Kubernetes cluster:
```
kubectl get componentstatuses
```bash
kubectl version
```
> output
```
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```text
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
```
List the nodes in the remote Kubernetes cluster:
```
```bash
kubectl get nodes
```
> output
```
NAME STATUS AGE VERSION
worker-0 Ready 7m v1.7.4
worker-1 Ready 4m v1.7.4
worker-2 Ready 1m v1.7.4
NAME STATUS ROLES AGE VERSION
node-0 Ready <none> 30m v1.28.3
node-1 Ready <none> 35m v1.28.3
```
Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)


@@ -12,49 +12,67 @@ In this section you will gather the information required to create routes in the
Print the internal IP address and Pod CIDR range for each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
```bash
{
SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
NODE_0_IP=$(grep node-0 machines.txt | cut -d " " -f 1)
NODE_0_SUBNET=$(grep node-0 machines.txt | cut -d " " -f 4)
NODE_1_IP=$(grep node-1 machines.txt | cut -d " " -f 1)
NODE_1_SUBNET=$(grep node-1 machines.txt | cut -d " " -f 4)
}
```
> output
```
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
```bash
ssh root@server <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
```
## Routes
Create network routes for each worker instance:
```
for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.2${i} \
--destination-range 10.200.${i}.0/24
done
```bash
ssh root@node-0 <<EOF
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
```
List the routes in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute routes list --filter "network kubernetes-the-hard-way"
```bash
ssh root@node-1 <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
EOF
```
> output
## Verification
```
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-77bcc6bee33b5535 kubernetes-the-hard-way 10.240.0.0/24 1000
default-route-b11fc914b626974d kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
```bash
ssh root@server ip route
```
Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.0.0/24 via XXX.XXX.XXX.XXX dev ens160
10.200.1.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```
```bash
ssh root@node-0 ip route
```
```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.1.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```
```bash
ssh root@node-1 ip route
```
```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.0.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```
Next: [Smoke Test](12-smoke-test.md)


@@ -1,79 +0,0 @@
# Deploying the DNS Cluster Add-on
In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery to applications running inside the Kubernetes cluster.
## The DNS Cluster Add-on
Deploy the `kube-dns` cluster add-on:
```
kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
```
> output
```
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created
```
List the pods created by the `kube-dns` deployment:
```
kubectl get pods -l k8s-app=kube-dns -n kube-system
```
> output
```
NAME READY STATUS RESTARTS AGE
kube-dns-3097350089-gq015 3/3 Running 0 20s
kube-dns-3097350089-q64qc 3/3 Running 0 20s
```
## Verification
Create a `busybox` deployment:
```
kubectl run busybox --image=busybox --command -- sleep 3600
```
List the pod created by the `busybox` deployment:
```
kubectl get pods -l run=busybox
```
> output
```
NAME READY STATUS RESTARTS AGE
busybox-2125412808-mt2vb 1/1 Running 0 15s
```
Retrieve the full name of the `busybox` pod:
```
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
```
Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
```
kubectl exec -ti $POD_NAME -- nslookup kubernetes
```
> output
```
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
```
Next: [Smoke Test](13-smoke-test.md)


@@ -8,38 +8,42 @@ In this section you will verify the ability to [encrypt secret data at rest](htt
Create a generic secret:
```
```bash
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
```
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
```
gcloud compute ssh controller-0 \
--command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```bash
ssh root@server \
'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
```
> output
```
```text
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 70 88 d8 52 83 b7 96 |:v1:key1:p..R...|
00000050 04 a3 bd 7e 42 9e 8a 77 2f 97 24 a7 68 3f c5 ec |...~B..w/.$.h?..|
00000060 9e f7 66 e8 a3 81 fc c8 3c df 63 71 33 0a 87 8f |..f.....<.cq3...|
00000070 0e c7 0a 0a f2 04 46 85 33 92 9a 4b 61 b2 10 c0 |......F.3..Ka...|
00000080 0b 00 05 dd c3 c2 d0 6b ff ff f2 32 3b e0 ec a0 |.......k...2;...|
00000090 63 d3 8b 1c 29 84 88 71 a7 88 e2 26 4b 65 95 14 |c...)..q...&Ke..|
000000a0 dc 8d 59 63 11 e5 f3 4e b4 94 cc 3d 75 52 c7 07 |..Yc...N...=uR..|
000000b0 73 f5 b4 b0 63 aa f9 9d 29 f8 d6 88 aa 33 c4 24 |s...c...)....3.$|
000000c0 ac c6 71 2b 45 98 9e 5f c6 a4 9d a2 26 3c 24 41 |..q+E.._....&<$A|
000000d0 95 5b d3 2c 4b 1e 4a 47 c8 47 c8 f3 ac d6 e8 cb |.[.,K.JG.G......|
000000e0 5f a9 09 93 91 d7 5d c9 c2 68 f8 cf 3c 7e 3b a3 |_.....]..h..<~;.|
000000f0 db d8 d5 9e 0c bf 2a 2f 58 0a |......*/X.|
000000fa
00000040 3a 76 31 3a 6b 65 79 31 3a 9b 79 a5 b9 49 a2 77 |:v1:key1:.y..I.w|
00000050 c0 6a c9 12 7c b4 c7 c4 64 41 37 97 4a 83 a9 c1 |.j..|...dA7.J...|
00000060 4f 14 ae 73 ab b8 38 26 11 14 0a 40 b8 f3 0e 0a |O..s..8&...@....|
00000070 f5 a7 a2 2c b6 35 b1 83 22 15 aa d0 dd 25 11 3e |...,.5.."....%.>|
00000080 c4 e9 69 1c 10 7a 9d f7 dc 22 28 89 2c 83 dd 0b |..i..z..."(.,...|
00000090 a4 5f 3a 93 0f ff 1f f8 bc 97 43 0e e5 05 5d f9 |._:.......C...].|
000000a0 ef 88 02 80 49 81 f1 58 b0 48 39 19 14 e1 b1 34 |....I..X.H9....4|
000000b0 f6 b0 9b 0a 9c 53 27 2b 23 b9 e6 52 b4 96 81 70 |.....S'+#..R...p|
000000c0 a7 b6 7b 4f 44 d4 9c 07 51 a3 1b 22 96 4c 24 6c |..{OD...Q..".L$l|
000000d0 44 6c db 53 f5 31 e6 3f 15 7b 4c 23 06 c1 37 73 |Dl.S.1.?.{L#..7s|
000000e0 e1 97 8e 4e 1a 2e 2c 1a da 85 c3 ff 42 92 d0 f1 |...N..,.....B...|
000000f0 87 b8 39 89 e8 46 2e b3 56 68 41 b8 1e 29 3d ba |..9..F..VhA..)=.|
00000100 dd d8 27 4c 7f d5 fe 97 3c a3 92 e9 3d ae 47 ee |..'L....<...=.G.|
00000110 24 6a 0b 7c ac b8 28 e6 25 a6 ce 04 80 ee c2 eb |$j.|..(.%.......|
00000120 4c 86 fa 70 66 13 63 59 03 c2 70 57 8b fb a1 d6 |L..pf.cY..pW....|
00000130 f2 58 08 84 43 f3 70 7f ad d8 30 63 3e ef ff b6 |.X..C.p...0c>...|
00000140 b2 06 c3 45 c5 d8 89 d3 47 4a 72 ca 20 9b cf b5 |...E....GJr. ...|
00000150 4b 3d 6d b4 58 ae 42 4b 7f 0a |K=m.X.BK..|
0000015a
```
The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
@@ -50,21 +54,20 @@ In this section you will verify the ability to create and manage [Deployments](h
Create a deployment for the [nginx](https://nginx.org/en/) web server:
```
kubectl run nginx --image=nginx
```bash
kubectl create deployment nginx \
--image=nginx:latest
```
List the pod created by the `nginx` deployment:
```
kubectl get pods -l run=nginx
```bash
kubectl get pods -l app=nginx
```
> output
```
NAME READY STATUS RESTARTS AGE
nginx-4217019353-b5gzn 1/1 Running 0 15s
```bash
NAME READY STATUS RESTARTS AGE
nginx-56fcf95486-c8dnx 1/1 Running 0 8s
```
### Port Forwarding
@@ -73,46 +76,44 @@ In this section you will verify the ability to access applications remotely usin
Retrieve the full name of the `nginx` pod:
```
POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
```bash
POD_NAME=$(kubectl get pods -l app=nginx \
-o jsonpath="{.items[0].metadata.name}")
```
Forward port `8080` on your local machine to port `80` of the `nginx` pod:
```
```bash
kubectl port-forward $POD_NAME 8080:80
```
> output
```
```text
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```
In a new terminal make an HTTP request using the forwarding address:
```
```bash
curl --head http://127.0.0.1:8080
```
> output
```
```text
HTTP/1.1 200 OK
Server: nginx/1.13.3
Date: Thu, 31 Aug 2017 01:58:15 GMT
Server: nginx/1.25.3
Date: Sun, 29 Oct 2023 01:44:32 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Jul 2017 13:06:07 GMT
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Connection: keep-alive
ETag: "5964cd3f-264"
ETag: "6537cac7-267"
Accept-Ranges: bytes
```
Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
```
```text
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
@@ -125,14 +126,13 @@ In this section you will verify the ability to [retrieve container logs](https:/
Print the `nginx` pod logs:
```
```bash
kubectl logs $POD_NAME
```
> output
```
127.0.0.1 - - [31/Aug/2017:01:58:15 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
```text
...
127.0.0.1 - - [01/Nov/2023:06:10:17 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.88.1" "-"
```
### Exec
@@ -141,14 +141,12 @@ In this section you will verify the ability to [execute commands in a container]
Print the nginx version by executing the `nginx -v` command in the `nginx` container:
```
```bash
kubectl exec -ti $POD_NAME -- nginx -v
```
> output
```
nginx version: nginx/1.13.3
```text
nginx version: nginx/1.25.3
```
## Services
@@ -157,52 +155,36 @@ In this section you will verify the ability to expose applications using a [Serv
Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
```
kubectl expose deployment nginx --port 80 --type NodePort
```bash
kubectl expose deployment nginx \
--port 80 --type NodePort
```
> The LoadBalancer service type cannot be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
Retrieve the node port assigned to the `nginx` service:
```
```bash
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```
Create a firewall rule that allows remote access to the `nginx` node port:
Make an HTTP request using the IP address and the `nginx` node port:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
```bash
curl -I http://node-0:${NODE_PORT}
```
Retrieve the external IP address of a worker instance:
```
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
```
Make an HTTP request using the external IP address and the `nginx` node port:
```
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
```
> output
```
```text
HTTP/1.1 200 OK
Server: nginx/1.13.3
Date: Thu, 31 Aug 2017 02:00:21 GMT
Server: nginx/1.25.3
Date: Sun, 29 Oct 2023 05:11:15 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Jul 2017 13:06:07 GMT
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Connection: keep-alive
ETag: "5964cd3f-264"
ETag: "6537cac7-267"
Accept-Ranges: bytes
```
Next: [Cleaning Up](14-cleanup.md)
Next: [Cleaning Up](13-cleanup.md)

7
docs/13-cleanup.md Normal file

@@ -0,0 +1,7 @@
# Cleaning Up
In this lab you will delete the compute resources created during this tutorial.
## Compute Instances
Delete the controller and worker compute instances.


@@ -1,67 +0,0 @@
# Cleaning Up
In this labs you will delete the compute resources created during this tutorial.
## Compute Instances
Delete the controller and worker compute instances:
```
gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2
```
## Networking
Delete the external load balancer network resources:
```
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region)
```
```
gcloud -q compute target-pools delete kubernetes-target-pool
```
```
gcloud -q compute http-health-checks delete kube-apiserver-health-check
```
Delete the `kubernetes-the-hard-way` static IP address:
```
gcloud -q compute addresses delete kubernetes-the-hard-way
```
Delete the `kubernetes-the-hard-way` firewall rules:
```
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-checks
```
Delete the Pod network routes:
```
gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
```
Delete the `kubernetes` subnet:
```
gcloud -q compute networks subnets delete kubernetes
```
Delete the `kubernetes-the-hard-way` network VPC:
```
gcloud -q compute networks delete kubernetes-the-hard-way
```

11
downloads.txt Normal file

@@ -0,0 +1,11 @@
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-apiserver
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-controller-manager
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-scheduler
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-arm.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.arm64
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz
https://github.com/containerd/containerd/releases/download/v1.7.8/containerd-1.7.8-linux-arm64.tar.gz
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-proxy
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubelet
https://github.com/etcd-io/etcd/releases/download/v3.4.27/etcd-v3.4.27-linux-arm64.tar.gz

19
units/containerd.service Normal file

@@ -0,0 +1,19 @@
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target

22
units/etcd.service Normal file

@@ -0,0 +1,22 @@
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd
[Service]
Type=notify
Environment="ETCD_UNSUPPORTED_ARCH=arm64"
ExecStart=/usr/local/bin/etcd \
--name controller \
--initial-advertise-peer-urls http://127.0.0.1:2380 \
--listen-peer-urls http://127.0.0.1:2380 \
--listen-client-urls http://127.0.0.1:2379 \
--advertise-client-urls http://127.0.0.1:2379 \
--initial-cluster-token etcd-cluster-0 \
--initial-cluster controller=http://127.0.0.1:2380 \
--initial-cluster-state new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,36 @@
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--allow-privileged=true \
--apiserver-count=1 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/audit.log \
--authorization-mode=Node,RBAC \
--bind-address=0.0.0.0 \
--client-ca-file=/var/lib/kubernetes/ca.crt \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--etcd-servers=http://127.0.0.1:2379 \
--event-ttl=1h \
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
--kubelet-client-certificate=/var/lib/kubernetes/kube-api-server.crt \
--kubelet-client-key=/var/lib/kubernetes/kube-api-server.key \
--runtime-config='api/all=true' \
--service-account-key-file=/var/lib/kubernetes/service-accounts.crt \
--service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \
--service-account-issuer=https://server.kubernetes.local:6443 \
--service-cluster-ip-range=10.32.0.0/24 \
--service-node-port-range=30000-32767 \
--tls-cert-file=/var/lib/kubernetes/kube-api-server.crt \
--tls-private-key-file=/var/lib/kubernetes/kube-api-server.key \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,22 @@
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--bind-address=0.0.0.0 \
--cluster-cidr=10.200.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
--cluster-signing-key-file=/var/lib/kubernetes/ca.key \
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
--root-ca-file=/var/lib/kubernetes/ca.crt \
--service-account-private-key-file=/var/lib/kubernetes/service-accounts.key \
--service-cluster-ip-range=10.32.0.0/24 \
--use-service-account-credentials=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

12
units/kube-proxy.service Normal file

@@ -0,0 +1,12 @@
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,13 @@
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--config=/etc/kubernetes/config/kube-scheduler.yaml \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target

17
units/kubelet.service Normal file

@@ -0,0 +1,17 @@
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--config=/var/lib/kubelet/kubelet-config.yaml \
--kubeconfig=/var/lib/kubelet/kubeconfig \
--register-node=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target