Merge branch 'master' into master

commit b723638070

@@ -48,3 +48,4 @@ service-account.csr
 service-account.pem
 service-account-csr.json
 *.swp
+.idea/

README.md
@@ -1,8 +1,6 @@
 # Kubernetes The Hard Way
 
-This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](https://kubernetes.io/docs/setup).
-
-Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
+This tutorial walks you through setting up Kubernetes the hard way. This guide is not for someone looking for a fully automated tool to bring up a Kubernetes cluster. Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
 
 > The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
 
@@ -13,24 +11,25 @@ Kubernetes The Hard Way is optimized for learning, which means taking the long r
 
 ## Target Audience
 
-The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.
+The target audience for this tutorial is someone who wants to understand the fundamentals of Kubernetes and how the core components fit together.
 
 ## Cluster Details
 
-Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
+Kubernetes The Hard Way guides you through bootstrapping a basic Kubernetes cluster with all control plane components running on a single node, and two worker nodes, which is enough to learn the core concepts.
 
-* [kubernetes](https://github.com/kubernetes/kubernetes) v1.21.0
-* [containerd](https://github.com/containerd/containerd) v1.4.4
-* [coredns](https://github.com/coredns/coredns) v1.8.3
-* [cni](https://github.com/containernetworking/cni) v0.9.1
-* [etcd](https://github.com/etcd-io/etcd) v3.4.15
+Component versions:
+
+* [kubernetes](https://github.com/kubernetes/kubernetes) v1.28.x
+* [containerd](https://github.com/containerd/containerd) v1.7.x
+* [cni](https://github.com/containernetworking/cni) v1.3.x
+* [etcd](https://github.com/etcd-io/etcd) v3.4.x
 
 ## Labs
 
-This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com). While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms.
+This tutorial requires four (4) ARM64 based virtual or physical machines connected to the same network. While ARM64 based machines are used for the tutorial, the lessons learned can be applied to other platforms.
 
 * [Prerequisites](docs/01-prerequisites.md)
-* [Installing the Client Tools](docs/02-client-tools.md)
+* [Setting up the Jumpbox](docs/02-jumpbox.md)
 * [Provisioning Compute Resources](docs/03-compute-resources.md)
 * [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md)
 * [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
@@ -40,9 +39,8 @@ This tutorial assumes you have access to the [Google Cloud Platform](https://clo
 * [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
 * [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
 * [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)
-* [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md)
-* [Smoke Test](docs/13-smoke-test.md)
-* [Cleaning Up](docs/14-cleanup.md)
+* [Smoke Test](docs/12-smoke-test.md)
+* [Cleaning Up](docs/13-cleanup.md)
 
 ## Implementations
 
@@ -51,4 +49,3 @@ The following list includes the implementations by the community.
 | Repository | Notes |
 | ---------- | ----- |
 | [Vagrant, Ansible & Cilium; no Cloud!](https://github.com/developer-friendly/kubernetes-the-hard-way) | [A complete guide and explanation](https://developer-friendly.blog/2024/03/03/kubernetes-the-hard-way/) |
-
@@ -0,0 +1,206 @@
[req]
distinguished_name = req_distinguished_name
prompt = no
x509_extensions = ca_x509_extensions

[ca_x509_extensions]
basicConstraints = CA:TRUE
keyUsage = cRLSign, keyCertSign

[req_distinguished_name]
C = US
ST = Washington
L = Seattle
CN = CA

[admin]
distinguished_name = admin_distinguished_name
prompt = no
req_extensions = default_req_extensions

[admin_distinguished_name]
CN = admin
O = system:masters

# Service Accounts
#
# The Kubernetes Controller Manager leverages a key pair to generate
# and sign service account tokens as described in the
# [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/)
# documentation.

[service-accounts]
distinguished_name = service-accounts_distinguished_name
prompt = no
req_extensions = default_req_extensions

[service-accounts_distinguished_name]
CN = service-accounts

# Worker Nodes
#
# Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/)
# called Node Authorizer, that specifically authorizes API requests made
# by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet).
# In order to be authorized by the Node Authorizer, Kubelets must use a credential
# that identifies them as being in the `system:nodes` group, with a username
# of `system:node:<nodeName>`.

[node-0]
distinguished_name = node-0_distinguished_name
prompt = no
req_extensions = node-0_req_extensions

[node-0_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Node-0 Certificate"
subjectAltName = DNS:node-0, IP:127.0.0.1
subjectKeyIdentifier = hash

[node-0_distinguished_name]
CN = system:node:node-0
O = system:nodes
C = US
ST = Washington
L = Seattle

[node-1]
distinguished_name = node-1_distinguished_name
prompt = no
req_extensions = node-1_req_extensions

[node-1_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Node-1 Certificate"
subjectAltName = DNS:node-1, IP:127.0.0.1
subjectKeyIdentifier = hash

[node-1_distinguished_name]
CN = system:node:node-1
O = system:nodes
C = US
ST = Washington
L = Seattle

# Kube Proxy Section

[kube-proxy]
distinguished_name = kube-proxy_distinguished_name
prompt = no
req_extensions = kube-proxy_req_extensions

[kube-proxy_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Proxy Certificate"
subjectAltName = DNS:kube-proxy, IP:127.0.0.1
subjectKeyIdentifier = hash

[kube-proxy_distinguished_name]
CN = system:kube-proxy
O = system:node-proxier
C = US
ST = Washington
L = Seattle

# Controller Manager

[kube-controller-manager]
distinguished_name = kube-controller-manager_distinguished_name
prompt = no
req_extensions = kube-controller-manager_req_extensions

[kube-controller-manager_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Controller Manager Certificate"
subjectAltName = DNS:kube-controller-manager, IP:127.0.0.1
subjectKeyIdentifier = hash

[kube-controller-manager_distinguished_name]
CN = system:kube-controller-manager
O = system:kube-controller-manager
C = US
ST = Washington
L = Seattle

# Scheduler

[kube-scheduler]
distinguished_name = kube-scheduler_distinguished_name
prompt = no
req_extensions = kube-scheduler_req_extensions

[kube-scheduler_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Scheduler Certificate"
subjectAltName = DNS:kube-scheduler, IP:127.0.0.1
subjectKeyIdentifier = hash

[kube-scheduler_distinguished_name]
CN = system:kube-scheduler
O = system:kube-scheduler
C = US
ST = Washington
L = Seattle

# API Server
#
# The Kubernetes API server is automatically assigned the `kubernetes`
# internal dns name, which will be linked to the first IP address (`10.32.0.1`)
# from the address range (`10.32.0.0/24`) reserved for internal cluster
# services.

[kube-api-server]
distinguished_name = kube-api-server_distinguished_name
prompt = no
req_extensions = kube-api-server_req_extensions

[kube-api-server_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube API Server Certificate"
subjectAltName = @kube-api-server_alt_names
subjectKeyIdentifier = hash

[kube-api-server_alt_names]
IP.0 = 127.0.0.1
IP.1 = 10.32.0.1
DNS.0 = kubernetes
DNS.1 = kubernetes.default
DNS.2 = kubernetes.default.svc
DNS.3 = kubernetes.default.svc.cluster
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = server.kubernetes.local
DNS.6 = api-server.kubernetes.local

[kube-api-server_distinguished_name]
CN = kubernetes
C = US
ST = Washington
L = Seattle

[default_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Admin Client Certificate"
subjectKeyIdentifier = hash
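This one configuration file drives both the CA itself (the `[req]` and `[ca_x509_extensions]` sections) and one certificate signing request per component (the named sections such as `[admin]` and `[node-0]`). As a rough usage sketch, assuming OpenSSL 3.x and that the file is saved as `ca.conf` (the file name is not shown in this diff):

```bash
# Create the CA private key and self-signed CA certificate.
openssl genrsa -out ca.key 4096
openssl req -x509 -new -sha512 -noenc \
  -key ca.key -days 3653 \
  -config ca.conf \
  -out ca.crt

# Issue a client certificate from one of the named sections, e.g. [admin].
openssl genrsa -out admin.key 4096
openssl req -new -key admin.key -sha256 \
  -config ca.conf -section admin \
  -out admin.csr
openssl x509 -req -days 3653 -sha256 \
  -in admin.csr -copy_extensions copyall \
  -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out admin.crt
```

The same pattern repeats for every other named section in the file.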
@@ -0,0 +1,15 @@
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{"subnet": "SUBNET"}]
    ],
    "routes": [{"dst": "0.0.0.0/0"}]
  }
}
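The `SUBNET` placeholder in the `ipam` block is meant to be replaced with the per-node pod subnet before the file is installed. A minimal sketch, assuming the template is saved as `10-bridge.conf` and the node's pod subnet is `10.200.0.0/24` (both names are assumptions; this diff does not show them):

```bash
# Render the per-node bridge config and install it where the CNI plugins look.
SUBNET=10.200.0.0/24   # hypothetical value for node-0
sed "s|SUBNET|${SUBNET}|g" 10-bridge.conf > /etc/cni/net.d/10-bridge.conf
```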
@@ -0,0 +1,5 @@
{
  "cniVersion": "1.1.0",
  "name": "lo",
  "type": "loopback"
}
@@ -0,0 +1,13 @@
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
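containerd reads `/etc/containerd/config.toml` by default, so installing this file and restarting the service is enough for the CRI plugin to pick up the `runc` runtime and the CNI paths defined above. A sketch, assuming the file is saved locally as `config.toml`:

```bash
# Install the config, restart containerd, and confirm the CRI endpoint responds.
mkdir -p /etc/containerd
cp config.toml /etc/containerd/config.toml
systemctl restart containerd
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock version
```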
@@ -0,0 +1,33 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
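This ClusterRole and binding grant the API server, which authenticates to kubelets as the `kubernetes` user (the CN in the API server certificate above), access to the kubelet endpoints needed by commands like `kubectl exec` and `kubectl logs`. A sketch of applying it, assuming the manifest is saved as `kube-apiserver-to-kubelet.yaml` and an admin kubeconfig is at hand (both names are assumptions):

```bash
# Apply with cluster-admin credentials once the control plane is up.
kubectl apply -f kube-apiserver-to-kubelet.yaml \
  --kubeconfig admin.kubeconfig
```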
@@ -0,0 +1,6 @@
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
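kube-proxy consumes this file through its `--config` flag; the `clusterCIDR` here must agree with the pod subnets assigned to the worker nodes. A sketch, assuming the file is installed as `/var/lib/kube-proxy/kube-proxy-config.yaml` (in practice it would be started from a systemd unit rather than by hand):

```bash
# Start kube-proxy against the config above (path is an assumption).
kube-proxy --config=/var/lib/kube-proxy/kube-proxy-config.yaml
```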
@@ -0,0 +1,6 @@
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
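Likewise, kube-scheduler loads this file via its `--config` flag. A sketch, assuming the file is installed as `/etc/kubernetes/config/kube-scheduler.yaml` (a hypothetical path):

```bash
# Start the scheduler against the config above (path is an assumption).
kube-scheduler --config=/etc/kubernetes/config/kube-scheduler.yaml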
@@ -0,0 +1,21 @@
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubelet/ca.crt"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
cgroupDriver: systemd
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"
podCIDR: "SUBNET"
resolvConf: "/etc/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet.crt"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet.key"
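Like the bridge CNI config, this template carries a `SUBNET` placeholder for the per-node `podCIDR`. A sketch of rendering and loading it, assuming the template is saved as `kubelet-config.yaml` and a pod subnet of `10.200.0.0/24` (hypothetical values); in practice the kubelet runs as a systemd service with additional flags:

```bash
# Render the per-node kubelet config and point the kubelet at it.
SUBNET=10.200.0.0/24   # hypothetical value for node-0
sed "s|SUBNET|${SUBNET}|g" kubelet-config.yaml > /var/lib/kubelet/kubelet-config.yaml
kubelet --config=/var/lib/kubelet/kubelet-config.yaml
```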
@@ -1,180 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
@@ -1,206 +0,0 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
@@ -1,63 +1,30 @@
 # Prerequisites
 
-## Google Cloud Platform
-
-This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
-
-[Estimated cost](https://cloud.google.com/products/calculator#id=873932bc-0840-4176-b0fa-a8cfd4ca61ae) to run this tutorial: $0.23 per hour ($5.50 per day).
-
-> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
-
-## Google Cloud Platform SDK
-
-### Install the Google Cloud SDK
-
-Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
-
-Verify the Google Cloud SDK version is 338.0.0 or higher:
-
-```
-gcloud version
-```
-
-### Set a Default Compute Region and Zone
-
-This tutorial assumes a default compute region and zone have been configured.
-
-If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this:
-
-```
-gcloud init
-```
-
-Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials:
-
-```
-gcloud auth login
-```
-
-Next set a default compute region and compute zone:
-
-```
-gcloud config set compute/region us-west1
-```
-
-Set a default compute zone:
-
-```
-gcloud config set compute/zone us-west1-c
-```
-
-> Use the `gcloud compute zones list` command to view additional regions and zones.
-
-## Running Commands in Parallel with tmux
-
-[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
-
-> The use of tmux is optional and not required to complete this tutorial.
-
-![tmux screenshot](images/tmux-screenshot.png)
-
-> Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
-
-Next: [Installing the Client Tools](02-client-tools.md)
+In this lab you will review the machine requirements necessary to follow this tutorial.
+
+## Virtual or Physical Machines
+
+This tutorial requires four (4) virtual or physical ARM64 machines running Debian 12 (bookworm). The following table lists the four machines and their CPU, memory, and storage requirements.
+
+| Name    | Description            | CPU | RAM   | Storage |
+|---------|------------------------|-----|-------|---------|
+| jumpbox | Administration host    | 1   | 512MB | 10GB    |
+| server  | Kubernetes server      | 1   | 2GB   | 20GB    |
+| node-0  | Kubernetes worker node | 1   | 2GB   | 20GB    |
+| node-1  | Kubernetes worker node | 1   | 2GB   | 20GB    |
+
+How you provision the machines is up to you; the only requirement is that each machine meets the system requirements above, including the machine specs and OS version. Once you have all four machines provisioned, verify the system requirements by running the `uname` command on each machine:
+
+```bash
+uname -mov
+```
+
+After running the `uname` command you should see the following output:
+
+```text
+#1 SMP Debian 6.1.55-1 (2023-09-29) aarch64 GNU/Linux
+```
+
+You may be surprised to see `aarch64` here, but that is the official name for the Arm Architecture 64-bit instruction set. You will often see `arm64` used by Apple, and by the maintainers of the Linux kernel, when referring to support for `aarch64`. This tutorial will use `arm64` consistently throughout to avoid confusion.
+
+Next: [Setting up the Jumpbox](02-jumpbox.md)
@@ -1,118 +0,0 @@
# Installing the Client Tools

In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).

## Install CFSSL

The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.

Download and install `cfssl` and `cfssljson`:

### OS X

```
curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssl
curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssljson
```

```
chmod +x cfssl cfssljson
```

```
sudo mv cfssl cfssljson /usr/local/bin/
```

Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:

```
brew install cfssl
```

### Linux

```
wget -q --show-progress --https-only --timestamping \
  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
  https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
```

```
chmod +x cfssl cfssljson
```

```
sudo mv cfssl cfssljson /usr/local/bin/
```

### Verification

Verify `cfssl` and `cfssljson` version 1.4.1 or higher is installed:

```
cfssl version
```

> output

```
Version: 1.4.1
Runtime: go1.12.12
```

```
cfssljson --version
```
```
Version: 1.4.1
Runtime: go1.12.12
```

## Install kubectl

The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:

### OS X

```
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/darwin/amd64/kubectl
```

```
chmod +x kubectl
```

```
sudo mv kubectl /usr/local/bin/
```

### Linux

```
wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
```

```
chmod +x kubectl
```

```
sudo mv kubectl /usr/local/bin/
```

### Verification

Verify `kubectl` version 1.21.0 or higher is installed:

```
kubectl version --client
```

> output

```
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
```

Next: [Provisioning Compute Resources](03-compute-resources.md)
@@ -0,0 +1,121 @@
# Set Up The Jumpbox

In this lab you will set up one of the four machines to be a `jumpbox`. This machine will be used to run commands in this tutorial. While a dedicated machine is being used to ensure consistency, these commands can also be run from just about any machine including your personal workstation running macOS or Linux.

Think of the `jumpbox` as the administration machine that you will use as a home base when setting up your Kubernetes cluster from the ground up. One thing we need to do before we get started is install a few command line utilities and clone the Kubernetes The Hard Way git repository, which contains some additional configuration files that will be used to configure various Kubernetes components throughout this tutorial.

Log in to the `jumpbox`:

```bash
ssh root@jumpbox
```

All commands will be run as the `root` user. This is being done for the sake of convenience, and will help reduce the number of commands required to set everything up.

### Install Command Line Utilities

Now that you are logged into the `jumpbox` machine as the `root` user, you will install the command line utilities that will be used to perform various tasks throughout the tutorial.

```bash
apt-get -y install wget curl vim openssl git
```

### Sync GitHub Repository

Now it's time to download a copy of this tutorial, which contains the configuration files and templates that will be used to build your Kubernetes cluster from the ground up. Clone the Kubernetes The Hard Way git repository using the `git` command:

```bash
git clone --depth 1 \
  https://github.com/kelseyhightower/kubernetes-the-hard-way.git
```

Change into the `kubernetes-the-hard-way` directory:

```bash
cd kubernetes-the-hard-way
```

This will be the working directory for the rest of the tutorial. If you ever get lost, run the `pwd` command to verify you are in the right directory when running commands on the `jumpbox`:

```bash
pwd
```

```text
/root/kubernetes-the-hard-way
```

### Download Binaries

In this section you will download the binaries for the various Kubernetes components. The binaries will be stored in the `downloads` directory on the `jumpbox`, which will reduce the amount of internet bandwidth required to complete this tutorial as we avoid downloading the binaries multiple times for each machine in our Kubernetes cluster.

From the `kubernetes-the-hard-way` directory create a `downloads` directory using the `mkdir` command:

```bash
mkdir downloads
```

The binaries that will be downloaded are listed in the `downloads.txt` file, which you can review using the `cat` command:

```bash
cat downloads.txt
```

Download the binaries listed in the `downloads.txt` file using the `wget` command:

```bash
wget -q --show-progress \
  --https-only \
  --timestamping \
  -P downloads \
  -i downloads.txt
```

Depending on your internet connection speed it may take a while to download the `584` megabytes of binaries. Once the download is complete, you can list them using the `ls` command:

```bash
ls -loh downloads
```

```text
total 584M
-rw-r--r-- 1 root  41M May  9 13:35 cni-plugins-linux-arm64-v1.3.0.tgz
-rw-r--r-- 1 root  34M Oct 26 15:21 containerd-1.7.8-linux-arm64.tar.gz
-rw-r--r-- 1 root  22M Aug 14 00:19 crictl-v1.28.0-linux-arm.tar.gz
-rw-r--r-- 1 root  15M Jul 11 02:30 etcd-v3.4.27-linux-arm64.tar.gz
-rw-r--r-- 1 root 111M Oct 18 07:34 kube-apiserver
-rw-r--r-- 1 root 107M Oct 18 07:34 kube-controller-manager
-rw-r--r-- 1 root  51M Oct 18 07:34 kube-proxy
-rw-r--r-- 1 root  52M Oct 18 07:34 kube-scheduler
-rw-r--r-- 1 root  46M Oct 18 07:34 kubectl
-rw-r--r-- 1 root 101M Oct 18 07:34 kubelet
-rw-r--r-- 1 root 9.6M Aug 10 18:57 runc.arm64
```

### Install kubectl

In this section you will install `kubectl`, the official Kubernetes client command line tool, on the `jumpbox` machine. `kubectl` will be used to interact with the Kubernetes control plane once your cluster is provisioned later in this tutorial.

Use the `chmod` command to make the `kubectl` binary executable and move it to the `/usr/local/bin/` directory:

```bash
{
  chmod +x downloads/kubectl
  cp downloads/kubectl /usr/local/bin/
}
```

At this point `kubectl` is installed and can be verified by running the `kubectl` command:

```bash
kubectl version --client
```

```text
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
```

At this point the `jumpbox` has been set up with all the command line tools and utilities necessary to complete the labs in this tutorial.

Next: [Provisioning Compute Resources](03-compute-resources.md)
@ -1,227 +1,225 @@
|
||||||
# Provisioning Compute Resources
|
# Provisioning Compute Resources
|
||||||
|
|
||||||
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
|
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the machines required for setting up a Kubernetes cluster.
|
||||||
|
|
||||||
> Ensure a default compute zone and region have been set as described in the [Prerequisites](01-prerequisites.md#set-a-default-compute-region-and-zone) lab.
|
## Machine Database
|
||||||
|
|
||||||
## Networking
|
This tutorial will leverage a text file, which will serve as a machine database, to store the various machine attributes that will be used when setting up the Kubernetes control plane and worker nodes. The following schema represents entries in the machine database, one entry per line:
|
||||||
|
|
||||||
The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints.
|
```text
|
||||||
|
IPV4_ADDRESS FQDN HOSTNAME POD_SUBNET
|
||||||
> Setting up network policies is out of scope for this tutorial.
|
|
||||||
|
|
||||||
### Virtual Private Cloud Network
|
|
||||||
|
|
||||||
In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster.
|
|
||||||
|
|
||||||
Create the `kubernetes-the-hard-way` custom VPC network:
|
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
|
|
||||||
```
|
```
|
||||||
|
|
||||||
A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
|
Each of the columns corresponds to a machine IP address `IPV4_ADDRESS`, fully qualified domain name `FQDN`, host name `HOSTNAME`, and the IP subnet `POD_SUBNET`. Kubernetes assigns one IP address per `pod` and the `POD_SUBNET` represents the unique IP address range assigned to each machine in the cluster for doing so.
|
||||||
|
|
||||||
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
|
Here is an example machine database similar to the one used when creating this tutorial. Notice the IP addresses have been masked out. Your machines can be assigned any IP address as long as each machine is reachable from each other and the `jumpbox`.
|
||||||
|
|
||||||
```
|
```bash
|
||||||
gcloud compute networks subnets create kubernetes \
|
cat machines.txt
|
||||||
--network kubernetes-the-hard-way \
|
|
||||||
--range 10.240.0.0/24
|
|
||||||
```
|
```
|
||||||
|
|
||||||
> The `10.240.0.0/24` IP address range can host up to 254 compute instances.
|
```text
|
||||||
|
XXX.XXX.XXX.XXX server.kubernetes.local server
|
||||||
### Firewall Rules
|
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0 10.200.0.0/24
|
||||||
|
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1 10.200.1.0/24
|
||||||
Create a firewall rule that allows internal communication across all protocols:
|
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
|
|
||||||
--allow tcp,udp,icmp \
|
|
||||||
--network kubernetes-the-hard-way \
|
|
||||||
--source-ranges 10.240.0.0/24,10.200.0.0/16
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
|
Now it's your turn to create a `machines.txt` file with the details for the three machines you will be using to create your Kubernetes cluster. Use the example machine database from above and add the details for your machines.
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
|
|
||||||
--allow tcp:22,tcp:6443,icmp \
|
|
||||||
--network kubernetes-the-hard-way \
|
|
||||||
--source-ranges 0.0.0.0/0
|
|
||||||
```
|
|
||||||
|
|
||||||
> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
|
|
||||||
|
|
||||||
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
|
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
|
|
||||||
```
|
|
||||||
|
|
||||||
> output
|
|
||||||
|
|
||||||
```
|
|
||||||
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
|
|
||||||
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp False
|
|
||||||
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp Fals
|
|
||||||
```
|
|
||||||
|
|
||||||
### Kubernetes Public IP Address
|
|
||||||
|
|
||||||
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
|
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute addresses create kubernetes-the-hard-way \
|
|
||||||
--region $(gcloud config get-value compute/region)
|
|
||||||
```
|
|
||||||
|
|
||||||
Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
|
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
|
|
||||||
```
|
|
||||||
|
|
||||||
> output
|
|
||||||
|
|
||||||
```
|
|
||||||
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
|
|
||||||
kubernetes-the-hard-way XX.XXX.XXX.XXX EXTERNAL us-west1 RESERVED
|
|
||||||
```
|
|
||||||
|
|
||||||
## Compute Instances
|
|
||||||
|
|
||||||
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 20.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
|
|
||||||
|
|
||||||
### Kubernetes Controllers
|
|
||||||
|
|
||||||
Create three compute instances which will host the Kubernetes control plane:
|
|
||||||
|
|
||||||
```
|
|
||||||
for i in 0 1 2; do
|
|
||||||
gcloud compute instances create controller-${i} \
|
|
||||||
--async \
|
|
||||||
--boot-disk-size 200GB \
|
|
||||||
--can-ip-forward \
|
|
||||||
--image-family ubuntu-2004-lts \
|
|
||||||
--image-project ubuntu-os-cloud \
|
|
||||||
--machine-type e2-standard-2 \
|
|
||||||
--private-network-ip 10.240.0.1${i} \
|
|
||||||
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
|
|
||||||
--subnet kubernetes \
|
|
||||||
--tags kubernetes-the-hard-way,controller
|
|
||||||
done
|
|
||||||
```
|
|
||||||
|
|
||||||
### Kubernetes Workers
|
|
||||||
|
|
||||||
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
|
|
||||||
|
|
||||||
> The Kubernetes cluster CIDR range is defined by the Controller Manager's `--cluster-cidr` flag. In this tutorial the cluster CIDR range will be set to `10.200.0.0/16`, which supports 254 subnets.
|
|
||||||
|
|
||||||
Create three compute instances which will host the Kubernetes worker nodes:
|
|
||||||
|
|
||||||
```
|
|
||||||
for i in 0 1 2; do
|
|
||||||
gcloud compute instances create worker-${i} \
|
|
||||||
--async \
|
|
||||||
--boot-disk-size 200GB \
|
|
||||||
--can-ip-forward \
|
|
||||||
--image-family ubuntu-2004-lts \
|
|
||||||
--image-project ubuntu-os-cloud \
|
|
||||||
--machine-type e2-standard-2 \
|
|
||||||
--metadata pod-cidr=10.200.${i}.0/24 \
|
|
||||||
--private-network-ip 10.240.0.2${i} \
|
|
||||||
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
|
|
||||||
--subnet kubernetes \
|
|
||||||
--tags kubernetes-the-hard-way,worker
|
|
||||||
done
|
|
||||||
```
|
|
||||||
|
|
||||||
### Verification
|
|
||||||
|
|
||||||
List the compute instances in your default compute zone:
|
|
||||||
|
|
||||||
```
|
|
||||||
gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
|
|
||||||
```
|
|
||||||
|
|
||||||
> output
|
|
||||||
|
|
||||||
```
|
|
||||||
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
|
|
||||||
controller-0 us-west1-c e2-standard-2 10.240.0.10 XX.XX.XX.XXX RUNNING
|
|
||||||
controller-1 us-west1-c e2-standard-2 10.240.0.11 XX.XXX.XXX.XX RUNNING
|
|
||||||
controller-2 us-west1-c e2-standard-2 10.240.0.12 XX.XXX.XX.XXX RUNNING
|
|
||||||
worker-0 us-west1-c e2-standard-2 10.240.0.20 XX.XX.XXX.XXX RUNNING
|
|
||||||
worker-1 us-west1-c e2-standard-2 10.240.0.21 XX.XX.XX.XXX RUNNING
|
|
||||||
worker-2 us-west1-c e2-standard-2 10.240.0.22 XX.XXX.XX.XX RUNNING
|
|
||||||
```
|
|
||||||
|
|
||||||
## Configuring SSH Access
|
## Configuring SSH Access
|
||||||
|
|
||||||
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
|
SSH will be used to configure the machines in the cluster. Verify that you have `root` SSH access to each machine listed in your machine database. You may need to enable root SSH access on each node by updating the sshd_config file and restarting the SSH server.
|
||||||
|
|
||||||
Test SSH access to the `controller-0` compute instances:
|
### Enable root SSH Access
|
||||||
|
|
||||||
```
|
If `root` SSH access is enabled for each of your machines you can skip this section.
|
||||||
gcloud compute ssh controller-0
|
|
||||||
|
By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons as the `root` user is a well known user on Linux systems, and if a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mention earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. On each machine login via SSH using your user account, then switch to the `root` user using the `su` command:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
su - root
|
||||||
```
|
```
|
||||||
|
|
||||||
If this is your first time connecting to a compute instance SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
|
Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and the `PermitRootLogin` option to `yes`:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sed -i \
|
||||||
|
's/^#PermitRootLogin.*/PermitRootLogin yes/' \
|
||||||
|
/etc/ssh/sshd_config
|
||||||
```
|
```
|
||||||
WARNING: The public SSH key file for gcloud does not exist.
|
|
||||||
WARNING: The private SSH key file for gcloud does not exist.
|
Restart the `sshd` SSH server to pick up the updated configuration file:
|
||||||
WARNING: You do not have an SSH key for gcloud.
|
|
||||||
WARNING: SSH keygen will be executed to generate a key.
|
```bash
|
||||||
|
systemctl restart sshd
|
||||||
|
```
|
||||||
|
|
||||||
|
### Generate and Distribute SSH Keys
|
||||||
|
|
||||||
|
In this section you will generate and distribute an SSH keypair to the `server`, `node-0`, and `node-1`, machines, which will be used to run commands on those machines throughout this tutorial. Run the following commands from the `jumpbox` machine.
|
||||||
|
|
||||||
|
Generate a new SSH key:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
ssh-keygen
|
||||||
|
```
|
||||||
|
|
||||||
|
```text
|
||||||
Generating public/private rsa key pair.
|
Generating public/private rsa key pair.
|
||||||
|
Enter file in which to save the key (/root/.ssh/id_rsa):
|
||||||
Enter passphrase (empty for no passphrase):
|
Enter passphrase (empty for no passphrase):
|
||||||
Enter same passphrase again:
|
Enter same passphrase again:
|
||||||
|
Your identification has been saved in /root/.ssh/id_rsa
|
||||||
|
Your public key has been saved in /root/.ssh/id_rsa.pub
|
||||||
```
|
```
|
||||||
|
|
||||||
At this point the generated SSH keys will be uploaded and stored in your project:
|
Copy the SSH public key to each machine:
|
||||||
|
|
||||||
```
|
```bash
|
||||||
Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
|
while read IP FQDN HOST SUBNET; do
|
||||||
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
|
ssh-copy-id root@${IP}
|
||||||
The key fingerprint is:
|
done < machines.txt
|
Once each key is added, verify SSH public key access is working:

```bash
while read IP FQDN HOST SUBNET; do
  ssh -n root@${IP} uname -o -m
done < machines.txt
```

```text
aarch64 GNU/Linux
aarch64 GNU/Linux
aarch64 GNU/Linux
```

## Hostnames

In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. Hostnames also play a major role within the cluster: instead of using an IP address to issue commands to the Kubernetes API server, Kubernetes clients will use the `server` hostname instead. Hostnames are also used by each worker machine, `node-0` and `node-1`, when registering with a given Kubernetes cluster.

To configure the hostname for each machine, run the following commands on the `jumpbox`.

Set the hostname on each machine listed in the `machines.txt` file:

```bash
while read IP FQDN HOST SUBNET; do
  CMD="sed -i 's/^127.0.1.1.*/127.0.1.1\t${FQDN} ${HOST}/' /etc/hosts"
  ssh -n root@${IP} "$CMD"
  ssh -n root@${IP} hostnamectl hostname ${HOST}
done < machines.txt
```

Verify the hostname is set on each machine:

```bash
while read IP FQDN HOST SUBNET; do
  ssh -n root@${IP} hostname --fqdn
done < machines.txt
```

```text
server.kubernetes.local
node-0.kubernetes.local
node-1.kubernetes.local
```

## DNS

In this section you will generate a DNS `hosts` file, which will be appended to the `jumpbox`'s local `/etc/hosts` file and to the `/etc/hosts` file of all three machines used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.

Create a new `hosts` file and add a header to identify the machines being added:

```bash
echo "" > hosts
echo "# Kubernetes The Hard Way" >> hosts
```

Generate a DNS entry for each machine in the `machines.txt` file and append it to the `hosts` file:

```bash
while read IP FQDN HOST SUBNET; do
  ENTRY="${IP} ${FQDN} ${HOST}"
  echo $ENTRY >> hosts
done < machines.txt
```

Review the DNS entries in the `hosts` file:

```bash
cat hosts
```

```text

# Kubernetes The Hard Way
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
```

## Adding DNS Entries To A Local Machine

In this section you will append the DNS entries from the `hosts` file to the local `/etc/hosts` file on your `jumpbox` machine.

Append the DNS entries from `hosts` to `/etc/hosts`:

```bash
cat hosts >> /etc/hosts
```

Verify that the `/etc/hosts` file has been updated:

```bash
cat /etc/hosts
```

```text
127.0.0.1       localhost
127.0.1.1       jumpbox

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

# Kubernetes The Hard Way
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
```

At this point you should be able to SSH to each machine listed in the `machines.txt` file using a hostname.

```bash
for host in server node-0 node-1
  do ssh root@${host} uname -o -m -n
done
```

```text
server aarch64 GNU/Linux
node-0 aarch64 GNU/Linux
node-1 aarch64 GNU/Linux
```

## Adding DNS Entries To The Remote Machines

In this section you will append the DNS entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.

Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:

```bash
while read IP FQDN HOST SUBNET; do
  scp hosts root@${HOST}:~/
  ssh -n \
    root@${HOST} "cat hosts >> /etc/hosts"
done < machines.txt
```

At this point hostnames can be used when connecting to machines from your `jumpbox` machine, or from any of the three machines in the Kubernetes cluster. Instead of using IP addresses you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.

Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

# Provisioning a CA and Generating TLS Certificates

In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using openssl to bootstrap a Certificate Authority, and generate TLS certificates for the following components: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy. The commands in this section should be run from the `jumpbox`.

## Certificate Authority

In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates for the other Kubernetes components. Setting up a CA and generating certificates using `openssl` can be time-consuming, especially when doing it for the first time. To streamline this lab, I've included an openssl configuration file, `ca.conf`, which defines all the details needed to generate certificates for each Kubernetes component.

Take a moment to review the `ca.conf` configuration file:

```bash
cat ca.conf
```

You don't need to understand everything in the `ca.conf` file to complete this tutorial, but you should consider it a starting point for learning `openssl` and, at a high level, the configuration that goes into managing certificates.

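To give a rough idea of the shape of such a file: an openssl config of this kind typically pairs a request section with one section per component and a set of v3 extensions. The snippet below is an illustrative sketch only, not the actual contents of `ca.conf`:

```text
# Illustrative sketch only -- review the real ca.conf in this repository
[req]
distinguished_name = req_distinguished_name
prompt             = no

[ca]
distinguished_name = ca_distinguished_name
x509_extensions    = ca_x509_extensions

[ca_distinguished_name]
CN = CA

[ca_x509_extensions]
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
```
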
Every certificate authority starts with a private key and root certificate. In this section we are going to create a self-signed certificate authority, and while that's all we need for this tutorial, this shouldn't be considered something you would do in a real-world production environment.

Generate the CA configuration file, certificate, and private key:

```bash
{
  openssl genrsa -out ca.key 4096
  openssl req -x509 -new -sha512 -noenc \
    -key ca.key -days 3653 \
    -config ca.conf \
    -out ca.crt
}
```

Results:

```txt
ca.crt ca.key
```

## Create Client and Server Certificates

In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.

Generate the certificates and private keys:

```bash
certs=(
  "admin" "node-0" "node-1"
  "kube-proxy" "kube-scheduler"
  "kube-controller-manager"
  "kube-api-server"
  "service-accounts"
)
```

```bash
for i in ${certs[*]}; do
  openssl genrsa -out "${i}.key" 4096

  openssl req -new -key "${i}.key" -sha256 \
    -config "ca.conf" -section ${i} \
    -out "${i}.csr"

  openssl x509 -req -days 3653 -in "${i}.csr" \
    -copy_extensions copyall \
    -sha256 -CA "ca.crt" \
    -CAkey "ca.key" \
    -CAcreateserial \
    -out "${i}.crt"
done
```

Running the above commands will generate a private key, certificate request, and signed SSL certificate for each of the Kubernetes components. You can list the generated files with the following command:

```bash
ls -1 *.crt *.key *.csr
```

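If you want to sanity-check one of the issued certificates, `openssl x509` can print its subject, issuer, and validity window; using `admin.crt` as an example:

```bash
# Print the subject, issuer, and validity period of the admin certificate
openssl x509 -in admin.crt -noout -subject -issuer -dates
```
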
## Distribute the Client and Server Certificates

In this section you will copy the various certificates to each machine, under a directory that the Kubernetes components will search for their certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets, as they are often used as credentials by the Kubernetes components to authenticate to each other.

Copy the appropriate certificates and private keys to the `node-0` and `node-1` machines:

```bash
for host in node-0 node-1; do
  ssh root@$host mkdir /var/lib/kubelet/

  scp ca.crt root@$host:/var/lib/kubelet/

  scp $host.crt \
    root@$host:/var/lib/kubelet/kubelet.crt

  scp $host.key \
    root@$host:/var/lib/kubelet/kubelet.key
done
```

Copy the appropriate certificates and private keys to the `server` machine:

```bash
scp \
  ca.key ca.crt \
  kube-api-server.key kube-api-server.crt \
  service-accounts.key service-accounts.crt \
  root@server:~/
```

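As an optional check (not part of the original steps), you can confirm the kubelet certificates landed where expected:

```bash
# Each node should now have ca.crt, kubelet.crt, and kubelet.key
ssh root@node-0 ls -1 /var/lib/kubelet/
```
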
> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.

Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)

# Generating Kubernetes Configuration Files for Authentication

In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.

## Client Authentication Configs

In this section you will generate kubeconfig files for the `kubelet` and the `admin` user.

### The kubelet Kubernetes Configuration File

When generating kubeconfig files for Kubelets, the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).

> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.

Generate a kubeconfig file for the `node-0` and `node-1` worker nodes:

```bash
for host in node-0 node-1; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=${host}.kubeconfig

  kubectl config set-credentials system:node:${host} \
    --client-certificate=${host}.crt \
    --client-key=${host}.key \
    --embed-certs=true \
    --kubeconfig=${host}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${host} \
    --kubeconfig=${host}.kubeconfig

  kubectl config use-context default \
    --kubeconfig=${host}.kubeconfig
done
```

Results:

```text
node-0.kubeconfig
node-1.kubeconfig
```

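To inspect what one of these kubeconfigs actually contains, `kubectl config view` will render it (embedded certificate data is redacted by default):

```bash
# Show the cluster, user, and context recorded in node-0's kubeconfig
kubectl config view --kubeconfig node-0.kubeconfig
```
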
### The kube-proxy Kubernetes Configuration File

Generate a kubeconfig file for the `kube-proxy` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.crt \
    --client-key=kube-proxy.key \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-proxy.kubeconfig
}
```

Results:

```text
kube-proxy.kubeconfig
```

### The kube-controller-manager Kubernetes Configuration File

Generate a kubeconfig file for the `kube-controller-manager` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.crt \
    --client-key=kube-controller-manager.key \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-controller-manager.kubeconfig
}
```

Results:

```text
kube-controller-manager.kubeconfig
```

### The kube-scheduler Kubernetes Configuration File

Generate a kubeconfig file for the `kube-scheduler` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.crt \
    --client-key=kube-scheduler.key \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-scheduler.kubeconfig
}
```

Results:

```text
kube-scheduler.kubeconfig
```

### The admin Kubernetes Configuration File

Generate a kubeconfig file for the `admin` user:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.crt \
    --client-key=admin.key \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default \
    --kubeconfig=admin.kubeconfig
}
```

Results:

```text
admin.kubeconfig
```

## Distribute the Kubernetes Configuration Files

Copy the `kubelet` and `kube-proxy` kubeconfig files to the `node-0` and `node-1` machines:

```bash
for host in node-0 node-1; do
  ssh root@$host "mkdir /var/lib/{kube-proxy,kubelet}"

  scp kube-proxy.kubeconfig \
    root@$host:/var/lib/kube-proxy/kubeconfig

  scp ${host}.kubeconfig \
    root@$host:/var/lib/kubelet/kubeconfig
done
```

Copy the `admin`, `kube-controller-manager`, and `kube-scheduler` kubeconfig files to the controller instance:

```bash
scp admin.kubeconfig \
  kube-controller-manager.kubeconfig \
  kube-scheduler.kubeconfig \
  root@server:~/
```

Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)

# Generating the Data Encryption Config and Key

In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/) suitable for encrypting Kubernetes Secrets.

Generate an encryption key:

```bash
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```

## The Encryption Config File

Create the `encryption-config.yaml` encryption config file:

```bash
envsubst < configs/encryption-config.yaml \
  > encryption-config.yaml
```

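The `configs/encryption-config.yaml` template is not reproduced here, but based on the inline version this lab previously used, it presumably follows the standard `EncryptionConfig` shape below, with `envsubst` filling in `${ENCRYPTION_KEY}`:

```text
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
```
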
Copy the `encryption-config.yaml` encryption config file to the controller instance:

```bash
scp encryption-config.yaml root@server:~/
```

Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)

# Bootstrapping the etcd Cluster

Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap an etcd cluster on the `server` machine.

## Prerequisites

Copy the `etcd` binaries and systemd unit files to the `server` instance:

```bash
scp \
  downloads/etcd-v3.4.27-linux-arm64.tar.gz \
  units/etcd.service \
  root@server:~/
```

The commands in this lab must be run on the `server` machine. Login to the `server` machine using the `ssh` command. Example:

```bash
ssh root@server
```

## Bootstrapping an etcd Cluster

### Install the etcd Binaries

Extract and install the `etcd` server and the `etcdctl` command line utility:

```bash
{
  tar -xvf etcd-v3.4.27-linux-arm64.tar.gz
  mv etcd-v3.4.27-linux-arm64/etcd* /usr/local/bin/
}
```

### Configure the etcd Server

```bash
{
  mkdir -p /etc/etcd /var/lib/etcd
  chmod 700 /var/lib/etcd
  cp ca.crt kube-api-server.key kube-api-server.crt \
    /etc/etcd/
}
```

Each etcd member must have a unique name within an etcd cluster; here the member name is configured in the supplied `etcd.service` unit file.

Install the `etcd.service` systemd unit file:

```bash
mv etcd.service /etc/systemd/system/
```

### Start the etcd Server

```bash
{
  systemctl daemon-reload
  systemctl enable etcd
  systemctl start etcd
}
```

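As a quick optional check (not part of the original steps), confirm the unit started cleanly before moving on:

```bash
# Prints "active" once etcd is up; use `journalctl -u etcd` if it is not
systemctl is-active etcd
```
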
## Verification

List the etcd cluster members:

```bash
etcdctl member list
```

```text
6702b0a34e2cfd39, started, controller, http://127.0.0.1:2380, http://127.0.0.1:2379, false
```

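For an additional signal that the member is actually serving requests, `etcdctl` can probe the endpoint as well (an optional check):

```bash
# Probes the default endpoint, http://127.0.0.1:2379, on this setup
etcdctl endpoint health
```
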
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)

# Bootstrapping the Kubernetes Control Plane

In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the controller machine: Kubernetes API Server, Scheduler, and Controller Manager.

## Prerequisites

Copy the Kubernetes binaries and systemd unit files to the `server` instance:

```bash
scp \
  downloads/kube-apiserver \
  downloads/kube-controller-manager \
  downloads/kube-scheduler \
  downloads/kubectl \
  units/kube-apiserver.service \
  units/kube-controller-manager.service \
  units/kube-scheduler.service \
  configs/kube-scheduler.yaml \
  configs/kube-apiserver-to-kubelet.yaml \
  root@server:~/
```

The commands in this lab must be run on the controller instance: `server`. Login to the controller instance using the `ssh` command. Example:

```bash
ssh root@server
```

## Provision the Kubernetes Control Plane

Create the Kubernetes configuration directory:

```bash
mkdir -p /etc/kubernetes/config
```

### Install the Kubernetes Controller Binaries

Install the Kubernetes binaries:

```bash
{
  chmod +x kube-apiserver \
    kube-controller-manager \
    kube-scheduler kubectl

  mv kube-apiserver \
    kube-controller-manager \
    kube-scheduler kubectl \
    /usr/local/bin/
}
```

### Configure the Kubernetes API Server

```bash
{
  mkdir -p /var/lib/kubernetes/

  mv ca.crt ca.key \
    kube-api-server.key kube-api-server.crt \
    service-accounts.key service-accounts.crt \
    encryption-config.yaml \
    /var/lib/kubernetes/
}
```

Install the `kube-apiserver.service` systemd unit file:

```bash
mv kube-apiserver.service \
  /etc/systemd/system/kube-apiserver.service
```

### Configure the Kubernetes Controller Manager

Move the `kube-controller-manager` kubeconfig into place:

```bash
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```

Install the `kube-controller-manager.service` systemd unit file:

```bash
mv kube-controller-manager.service /etc/systemd/system/
```

### Configure the Kubernetes Scheduler

Move the `kube-scheduler` kubeconfig into place:

```bash
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```

Move the `kube-scheduler.yaml` configuration file into place:

```bash
mv kube-scheduler.yaml /etc/kubernetes/config/
```

Install the `kube-scheduler.service` systemd unit file:

```bash
mv kube-scheduler.service /etc/systemd/system/
```

### Start the Controller Services

```bash
{
  systemctl daemon-reload

  systemctl enable kube-apiserver \
    kube-controller-manager kube-scheduler

  systemctl start kube-apiserver \
    kube-controller-manager kube-scheduler
}
```

> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.

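Before verifying with `kubectl`, you can optionally confirm all three services came up (a quick sanity check, not part of the original steps):

```bash
# Each line should print "active"
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
```
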
### Verification

```bash
kubectl cluster-info \
  --kubeconfig admin.kubeconfig
```

```text
Kubernetes control plane is running at https://127.0.0.1:6443
```

## RBAC for Kubelet Authorization

In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.

> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.

The commands in this section will affect the entire cluster and only need to be run on the controller node.

```bash
ssh root@server
```

Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

```bash
kubectl apply -f kube-apiserver-to-kubelet.yaml \
  --kubeconfig admin.kubeconfig
```

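The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag. For reference, `kube-apiserver-to-kubelet.yaml` bundles the ClusterRole and the ClusterRoleBinding that earlier revisions of this lab created inline, roughly:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
```
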
### Verification

At this point the Kubernetes control plane is up and running. Run the following commands from the `jumpbox` machine to verify it's working.

Make an HTTP request for the Kubernetes version info:

```bash
curl -k --cacert ca.crt https://server.kubernetes.local:6443/version
```

```text
{
  "major": "1",
  "minor": "28",
  "gitVersion": "v1.28.3",
  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
  "gitTreeState": "clean",
  "buildDate": "2023-10-18T11:33:18Z",
  "goVersion": "go1.20.10",
  "compiler": "gc",
  "platform": "linux/arm64"
}
```

@ -1,27 +1,60 @@

# Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).

## Prerequisites

Copy Kubernetes binaries and systemd unit files to each worker instance:

```bash
for host in node-0 node-1; do
  SUBNET=$(grep $host machines.txt | cut -d " " -f 4)
  sed "s|SUBNET|$SUBNET|g" \
    configs/10-bridge.conf > 10-bridge.conf

  sed "s|SUBNET|$SUBNET|g" \
    configs/kubelet-config.yaml > kubelet-config.yaml

  scp 10-bridge.conf kubelet-config.yaml \
    root@$host:~/
done
```

```bash
for host in node-0 node-1; do
  scp \
    downloads/runc.arm64 \
    downloads/crictl-v1.28.0-linux-arm.tar.gz \
    downloads/cni-plugins-linux-arm64-v1.3.0.tgz \
    downloads/containerd-1.7.8-linux-arm64.tar.gz \
    downloads/kubectl \
    downloads/kubelet \
    downloads/kube-proxy \
    configs/99-loopback.conf \
    configs/containerd-config.toml \
    configs/kubelet-config.yaml \
    configs/kube-proxy-config.yaml \
    units/containerd.service \
    units/kubelet.service \
    units/kube-proxy.service \
    root@$host:~/
done
```

The commands in this lab must be run on each worker instance: `node-0`, `node-1`. Login to the worker instance using the `ssh` command. Example:

```bash
ssh root@node-0
```

## Provisioning a Kubernetes Worker Node

Install the OS dependencies:

```bash
{
  apt-get update
  apt-get -y install socat conntrack ipset
}
```

### Disable Swap

By default, the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.

Verify if swap is enabled:

```bash
swapon --show
```

If the output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:

```bash
swapoff -a
```

> To ensure swap remains off after reboot consult your Linux distro documentation.

Create the installation directories:

```bash
mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
```

Install the worker binaries:

```bash
{
  mkdir -p containerd
  tar -xvf crictl-v1.28.0-linux-arm.tar.gz
  tar -xvf containerd-1.7.8-linux-arm64.tar.gz -C containerd
  tar -xvf cni-plugins-linux-arm64-v1.3.0.tgz -C /opt/cni/bin/
  mv runc.arm64 runc
  chmod +x crictl kubectl kube-proxy kubelet runc
  mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
  mv containerd/bin/* /bin/
}
```

### Configure CNI Networking

Install the `bridge` and `loopback` network configuration files:

```bash
mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```

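For reference, `10-bridge.conf` looks roughly like this (a sketch based on the configuration earlier revisions created inline; the exact `cniVersion` shipped with CNI plugins v1.3.0 is an assumption, and `SUBNET` is replaced per node by the `sed` command in the Prerequisites):

```json
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cnio0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{"subnet": "SUBNET"}]
    ],
    "routes": [{"dst": "0.0.0.0/0"}]
  }
}
```
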
### Configure containerd

Install the `containerd` configuration files:

```bash
{
  mkdir -p /etc/containerd/
  mv containerd-config.toml /etc/containerd/config.toml
  mv containerd.service /etc/systemd/system/
}
```

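An optional check: `containerd config dump` renders the merged on-disk and default configuration, which is a quick way to confirm the file you just installed parses cleanly (this assumes the `containerd` binary installed earlier is on the `PATH`):

```bash
containerd config dump | head -20
```
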
### Configure the Kubelet

Install the `kubelet-config.yaml` configuration file and the `kubelet.service` systemd unit file:

```bash
{
  mv kubelet-config.yaml /var/lib/kubelet/
  mv kubelet.service /etc/systemd/system/
}
```

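For reference, the distributed `kubelet-config.yaml` looks roughly like this (a sketch based on the configuration earlier revisions created inline; the `.crt`/`.key` certificate file names follow this revision's naming and are assumptions, and `SUBNET` is substituted per node on the jumpbox):

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubelet/ca.crt"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "SUBNET"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet.crt"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet.key"
```

> The `resolvConf` setting is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
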
### Configure the Kubernetes Proxy

```bash
{
  mv kube-proxy-config.yaml /var/lib/kube-proxy/
  mv kube-proxy.service /etc/systemd/system/
}
```

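For reference, `kube-proxy-config.yaml` looks roughly like this (a sketch based on the configuration earlier revisions created inline):

```yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
```
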
### Start the Worker Services

```bash
{
  systemctl daemon-reload
  systemctl enable containerd kubelet kube-proxy
  systemctl start containerd kubelet kube-proxy
}
```

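As on the controller, an optional check that the worker services came up:

```bash
systemctl is-active containerd kubelet kube-proxy
```
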
## Verification

The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the `jumpbox` machine.

List the registered Kubernetes nodes:

```bash
ssh root@server \
  "kubectl get nodes \
    --kubeconfig admin.kubeconfig"
```

```text
NAME     STATUS   ROLES    AGE   VERSION
node-0   Ready    <none>   1m    v1.28.3
node-1   Ready    <none>   10s   v1.28.3
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

@ -2,28 +2,45 @@

In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.

> Run the commands in this lab from the `jumpbox` machine.

## The Admin Kubernetes Configuration File

Each kubeconfig requires a Kubernetes API Server to connect to.

You should be able to ping `server.kubernetes.local` based on the `/etc/hosts` DNS entry from a previous lab.

```bash
curl -k --cacert ca.crt \
  https://server.kubernetes.local:6443/version
```

```text
{
  "major": "1",
  "minor": "28",
  "gitVersion": "v1.28.3",
  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
  "gitTreeState": "clean",
  "buildDate": "2023-10-18T11:33:18Z",
  "goVersion": "go1.20.10",
  "compiler": "gc",
  "platform": "linux/arm64"
}
```

Generate a kubeconfig file suitable for authenticating as the `admin` user:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.crt \
    --client-key=admin.key

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}
```

Running the commands above creates a kubeconfig file in the default location `~/.kube/config`, which is used by the `kubectl` command line tool. This also means you can run `kubectl` without specifying a config file.

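To inspect what was written, `kubectl config view --minify` prints only the active context's entries (an optional check):

```bash
kubectl config view --minify
```
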
## Verification

Check the version of the remote Kubernetes cluster:

```bash
kubectl version
```

```text
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
```

List the nodes in the remote Kubernetes cluster:

```bash
kubectl get nodes
```

```text
NAME     STATUS   ROLES    AGE   VERSION
node-0   Ready    <none>   30m   v1.28.3
node-1   Ready    <none>   35m   v1.28.3
```

Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

@ -12,49 +12,67 @@ In this section you will gather the information required to create routes in the

Look up the internal IP address and Pod CIDR range for each machine in `machines.txt`:

```bash
{
  SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
  NODE_0_IP=$(grep node-0 machines.txt | cut -d " " -f 1)
  NODE_0_SUBNET=$(grep node-0 machines.txt | cut -d " " -f 4)
  NODE_1_IP=$(grep node-1 machines.txt | cut -d " " -f 1)
  NODE_1_SUBNET=$(grep node-1 machines.txt | cut -d " " -f 4)
}
```

```bash
ssh root@server <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
```

```bash
ssh root@node-0 <<EOF
ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
```

```bash
ssh root@node-1 <<EOF
ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
EOF
```

## Verification

```bash
ssh root@server ip route
```

```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.0.0/24 via XXX.XXX.XXX.XXX dev ens160
10.200.1.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```

```bash
ssh root@node-0 ip route
```

```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.1.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```

```bash
ssh root@node-1 ip route
```

```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.0.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```

Next: [Smoke Test](12-smoke-test.md)

@ -1,81 +0,0 @@

@ -8,48 +8,41 @@ In this section you will verify the ability to [encrypt secret data at rest](htt

Create a generic secret:

```bash
kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
```

Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:

```bash
ssh root@server \
    'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
```

```text
00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
00000040  3a 76 31 3a 6b 65 79 31  3a 9b 79 a5 b9 49 a2 77  |:v1:key1:.y..I.w|
00000050  c0 6a c9 12 7c b4 c7 c4  64 41 37 97 4a 83 a9 c1  |.j..|...dA7.J...|
00000060  4f 14 ae 73 ab b8 38 26  11 14 0a 40 b8 f3 0e 0a  |O..s..8&...@....|
00000070  f5 a7 a2 2c b6 35 b1 83  22 15 aa d0 dd 25 11 3e  |...,.5.."....%.>|
00000080  c4 e9 69 1c 10 7a 9d f7  dc 22 28 89 2c 83 dd 0b  |..i..z..."(.,...|
00000090  a4 5f 3a 93 0f ff 1f f8  bc 97 43 0e e5 05 5d f9  |._:.......C...].|
000000a0  ef 88 02 80 49 81 f1 58  b0 48 39 19 14 e1 b1 34  |....I..X.H9....4|
000000b0  f6 b0 9b 0a 9c 53 27 2b  23 b9 e6 52 b4 96 81 70  |.....S'+#..R...p|
000000c0  a7 b6 7b 4f 44 d4 9c 07  51 a3 1b 22 96 4c 24 6c  |..{OD...Q..".L$l|
000000d0  44 6c db 53 f5 31 e6 3f  15 7b 4c 23 06 c1 37 73  |Dl.S.1.?.{L#..7s|
000000e0  e1 97 8e 4e 1a 2e 2c 1a  da 85 c3 ff 42 92 d0 f1  |...N..,.....B...|
000000f0  87 b8 39 89 e8 46 2e b3  56 68 41 b8 1e 29 3d ba  |..9..F..VhA..)=.|
00000100  dd d8 27 4c 7f d5 fe 97  3c a3 92 e9 3d ae 47 ee  |..'L....<...=.G.|
00000110  24 6a 0b 7c ac b8 28 e6  25 a6 ce 04 80 ee c2 eb  |$j.|..(.%.......|
00000120  4c 86 fa 70 66 13 63 59  03 c2 70 57 8b fb a1 d6  |L..pf.cY..pW....|
00000130  f2 58 08 84 43 f3 70 7f  ad d8 30 63 3e ef ff b6  |.X..C.p...0c>...|
00000140  b2 06 c3 45 c5 d8 89 d3  47 4a 72 ca 20 9b cf b5  |...E....GJr. ...|
00000150  4b 3d 6d b4 58 ae 42 4b  7f 0a                    |K=m.X.BK..|
0000015a
```

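The etcd key is prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key. As a negative test, you can confirm the literal secret value is not stored in plaintext (an optional check, not part of the original lab):

```bash
ssh root@server \
    "etcdctl get /registry/secrets/default/kubernetes-the-hard-way | strings | grep mydata \
      || echo 'plaintext value not found'"
```
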
In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).

Create a deployment for the [nginx](https://nginx.org/en/) web server:

```bash
kubectl create deployment nginx \
  --image=nginx:latest
```

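Optionally wait for the rollout to complete before listing pods; `kubectl wait` blocks until the Deployment reports the `Available` condition:

```bash
kubectl wait deployment/nginx \
  --for=condition=Available --timeout=90s
```
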
List the pod created by the `nginx` deployment:

```bash
kubectl get pods -l app=nginx
```

```text
NAME                     READY   STATUS    RESTARTS   AGE
nginx-56fcf95486-c8dnx   1/1     Running   0          8s
```

### Port Forwarding

In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).

Retrieve the full name of the `nginx` pod:

```bash
POD_NAME=$(kubectl get pods -l app=nginx \
  -o jsonpath="{.items[0].metadata.name}")
```

Forward port `8080` on your local machine to port `80` of the `nginx` pod:

```bash
kubectl port-forward $POD_NAME 8080:80
```

```text
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```

In a new terminal make an HTTP request using the forwarding address:

```bash
curl --head http://127.0.0.1:8080
```

```text
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Sun, 29 Oct 2023 01:44:32 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Connection: keep-alive
ETag: "6537cac7-267"
Accept-Ranges: bytes
```

Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:

```text
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
```

### Logs

In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).

Print the `nginx` pod logs:

```bash
kubectl logs $POD_NAME
```

```text
...
127.0.0.1 - - [01/Nov/2023:06:10:17 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.88.1" "-"
```

### Exec

In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/).

Print the nginx version by executing the `nginx -v` command in the `nginx` container:

```bash
kubectl exec -ti $POD_NAME -- nginx -v
```

```text
nginx version: nginx/1.25.3
```

## Services

In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).

Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:

```bash
kubectl expose deployment nginx \
  --port 80 --type NodePort
```

> The LoadBalancer service type cannot be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.

Retrieve the node port assigned to the `nginx` service:

```bash
NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```

Make an HTTP request using the IP address and the `nginx` node port:

```bash
curl -I http://node-0:${NODE_PORT}
```

```text
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Sun, 29 Oct 2023 05:11:15 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Connection: keep-alive
ETag: "6537cac7-267"
Accept-Ranges: bytes
```

Next: [Cleaning Up](13-cleanup.md)

@ -0,0 +1,7 @@

# Cleaning Up

In this lab you will delete the compute resources created during this tutorial.

## Compute Instances

Delete the controller and worker compute instances.

@ -1,63 +0,0 @@

@ -0,0 +1,11 @@

https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-apiserver
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-controller-manager
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-scheduler
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-arm.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.arm64
https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz
https://github.com/containerd/containerd/releases/download/v1.7.8/containerd-1.7.8-linux-arm64.tar.gz
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kube-proxy
https://storage.googleapis.com/kubernetes-release/release/v1.28.3/bin/linux/arm64/kubelet
https://github.com/etcd-io/etcd/releases/download/v3.4.27/etcd-v3.4.27-linux-arm64.tar.gz

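If these URLs are saved to a file (the jumpbox setup assumes a `downloads.txt`; the file name here is an assumption), everything can be fetched in one pass with the same `wget` flags earlier revisions used:

```bash
wget -q --show-progress \
  --https-only --timestamping \
  -P downloads -i downloads.txt
```
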
@ -0,0 +1,19 @@

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target

@ -0,0 +1,22 @@

[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
Environment="ETCD_UNSUPPORTED_ARCH=arm64"
ExecStart=/usr/local/bin/etcd \
  --name controller \
  --initial-advertise-peer-urls http://127.0.0.1:2380 \
  --listen-peer-urls http://127.0.0.1:2380 \
  --listen-client-urls http://127.0.0.1:2379 \
  --advertise-client-urls http://127.0.0.1:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster controller=http://127.0.0.1:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

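Once this unit is active on the server, a minimal etcd health check (optional; this assumes `etcdctl` was installed alongside `etcd`):

```bash
etcdctl member list
```
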
@ -0,0 +1,36 @@

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --allow-privileged=true \
  --apiserver-count=1 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.crt \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --etcd-servers=http://127.0.0.1:2379 \
  --event-ttl=1h \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
  --kubelet-client-certificate=/var/lib/kubernetes/kube-api-server.crt \
  --kubelet-client-key=/var/lib/kubernetes/kube-api-server.key \
  --runtime-config='api/all=true' \
  --service-account-key-file=/var/lib/kubernetes/service-accounts.crt \
  --service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \
  --service-account-issuer=https://server.kubernetes.local:6443 \
  --service-cluster-ip-range=10.32.0.0/24 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kube-api-server.crt \
  --tls-private-key-file=/var/lib/kubernetes/kube-api-server.key \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

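With this unit active, the API server's `/healthz` endpoint (which does not require authentication by default) offers a quick liveness check from the server itself; `-k` skips certificate verification because the request targets `127.0.0.1` rather than the certificate's host name:

```bash
curl -k https://127.0.0.1:6443/healthz
```

The expected response body is `ok`.
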
@ -0,0 +1,22 @@

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --bind-address=0.0.0.0 \
  --cluster-cidr=10.200.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
  --cluster-signing-key-file=/var/lib/kubernetes/ca.key \
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --root-ca-file=/var/lib/kubernetes/ca.crt \
  --service-account-private-key-file=/var/lib/kubernetes/service-accounts.key \
  --service-cluster-ip-range=10.32.0.0/24 \
  --use-service-account-credentials=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

@ -0,0 +1,12 @@

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

@ -0,0 +1,13 @@

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/config/kube-scheduler.yaml \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

@ -0,0 +1,17 @@

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --config=/var/lib/kubelet/kubelet-config.yaml \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --register-node=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
