Mirror of https://github.com/kelseyhightower/kubernetes-the-hard-way.git (synced 2025-07-29 15:13:53 +03:00)

Compare commits: 23 commits
| SHA1       |
|------------|
| 52eb26dad1 |
| b2bf9fb2f6 |
| d1f7e159e1 |
| b6e493e463 |
| f377d2ad74 |
| 86d51471b4 |
| ea9178edae |
| c9690e523a |
| 08b198f2a0 |
| 5a325c23d7 |
| a9cb5f7ba5 |
| 79a3f79b27 |
| ca96371e4d |
| 5c462220b7 |
| bf2850974e |
| b974042d95 |
| 4f5cecb5ed |
| 68ebb95898 |
| 8db0280ef6 |
| a5c83f1cee |
| e2dbdaa66b |
| f46642a6ba |
**.gitignore** (vendored): 19 changes
```diff
@@ -2,12 +2,23 @@ admin-csr.json
 admin-key.pem
 admin.csr
 admin.pem
+admin.kubeconfig
 ca-config.json
 ca-csr.json
 ca-key.pem
 ca.csr
 ca.pem
-encryption-config.yaml
+/encryption-config.yaml
+kube-controller-manager-csr.json
+kube-controller-manager-key.pem
+kube-controller-manager.csr
+kube-controller-manager.kubeconfig
+kube-controller-manager.pem
+kube-scheduler-csr.json
+kube-scheduler-key.pem
+kube-scheduler.csr
+kube-scheduler.kubeconfig
+kube-scheduler.pem
 kube-proxy-csr.json
 kube-proxy-key.pem
 kube-proxy.csr
@@ -32,3 +43,9 @@ worker-2-key.pem
 worker-2.csr
 worker-2.kubeconfig
 worker-2.pem
+service-account-key.pem
+service-account.csr
+service-account.pem
+service-account-csr.json
+*.swp
+.idea/
```
**CONTRIBUTING.md** (new file): 18 lines
```markdown
This project is made possible by contributors like YOU! While all contributions are welcomed, please be sure to follow these suggestions to help your PR get merged.

## License

This project uses an [Apache license](LICENSE). Be sure you're comfortable with the implications of that before working up a patch.

## Review and merge process

Review and merge duties are managed by [@kelseyhightower](https://github.com/kelseyhightower). Expect some burden of proof for demonstrating the marginal value of adding new content to the tutorial.

Here are some examples of the review and justification process:

- [#208](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/208)
- [#282](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/282)

## Notes on minutiae

If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time.
```
**COPYRIGHT.md** (new file): 3 lines
```markdown
# Copyright

<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>
```
**README.md**: 32 changes
```diff
@@ -1,30 +1,35 @@
 # Kubernetes The Hard Way
 
-This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
-
-Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
+This tutorial walks you through setting up Kubernetes the hard way. This guide is not for someone looking for a fully automated tool to bring up a Kubernetes cluster. Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
 
 > The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
 
+## Copyright
+
+<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.
+
 ## Target Audience
 
-The target audience for this tutorial is someone planning to support a production Kubernetes cluster and wants to understand how everything fits together.
+The target audience for this tutorial is someone who wants to understand the fundamentals of Kubernetes and how the core components fit together.
 
 ## Cluster Details
 
-Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
+Kubernetes The Hard Way guides you through bootstrapping a basic Kubernetes cluster with all control plane components running on a single node, and two worker nodes, which is enough to learn the core concepts.
 
-* [Kubernetes](https://github.com/kubernetes/kubernetes) 1.9.0
-* [cri-containerd Container Runtime](https://github.com/kubernetes-incubator/cri-containerd) 1.0.0-beta.0
-* [CNI Container Networking](https://github.com/containernetworking/cni) 0.6.0
-* [etcd](https://github.com/coreos/etcd) 3.2.11
+Component versions:
+
+* [kubernetes](https://github.com/kubernetes/kubernetes) v1.32.x
+* [containerd](https://github.com/containerd/containerd) v2.1.x
+* [cni](https://github.com/containernetworking/cni) v1.6.x
+* [etcd](https://github.com/etcd-io/etcd) v3.6.x
 
 ## Labs
 
-This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com). While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms.
+This tutorial requires four (4) ARM64 or AMD64 based virtual or physical machines connected to the same network.
 
 * [Prerequisites](docs/01-prerequisites.md)
-* [Installing the Client Tools](docs/02-client-tools.md)
+* [Setting up the Jumpbox](docs/02-jumpbox.md)
 * [Provisioning Compute Resources](docs/03-compute-resources.md)
 * [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md)
 * [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
@@ -34,6 +39,5 @@
 * [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
 * [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
 * [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)
-* [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md)
-* [Smoke Test](docs/13-smoke-test.md)
-* [Cleaning Up](docs/14-cleanup.md)
+* [Smoke Test](docs/12-smoke-test.md)
+* [Cleaning Up](docs/13-cleanup.md)
```
**ca.conf** (new file): 206 lines
```ini
[req]
distinguished_name = req_distinguished_name
prompt = no
x509_extensions = ca_x509_extensions

[ca_x509_extensions]
basicConstraints = CA:TRUE
keyUsage = cRLSign, keyCertSign

[req_distinguished_name]
C = US
ST = Washington
L = Seattle
CN = CA

[admin]
distinguished_name = admin_distinguished_name
prompt = no
req_extensions = default_req_extensions

[admin_distinguished_name]
CN = admin
O = system:masters

# Service Accounts
#
# The Kubernetes Controller Manager leverages a key pair to generate
# and sign service account tokens as described in the
# [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/)
# documentation.

[service-accounts]
distinguished_name = service-accounts_distinguished_name
prompt = no
req_extensions = default_req_extensions

[service-accounts_distinguished_name]
CN = service-accounts

# Worker Nodes
#
# Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/docs/admin/authorization/node/)
# called Node Authorizer, that specifically authorizes API requests made
# by [Kubelets](https://kubernetes.io/docs/concepts/overview/components/#kubelet).
# In order to be authorized by the Node Authorizer, Kubelets must use a credential
# that identifies them as being in the `system:nodes` group, with a username
# of `system:node:<nodeName>`.

[node-0]
distinguished_name = node-0_distinguished_name
prompt = no
req_extensions = node-0_req_extensions

[node-0_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Node-0 Certificate"
subjectAltName = DNS:node-0, IP:127.0.0.1
subjectKeyIdentifier = hash

[node-0_distinguished_name]
CN = system:node:node-0
O = system:nodes
C = US
ST = Washington
L = Seattle

[node-1]
distinguished_name = node-1_distinguished_name
prompt = no
req_extensions = node-1_req_extensions

[node-1_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Node-1 Certificate"
subjectAltName = DNS:node-1, IP:127.0.0.1
subjectKeyIdentifier = hash

[node-1_distinguished_name]
CN = system:node:node-1
O = system:nodes
C = US
ST = Washington
L = Seattle


# Kube Proxy Section
[kube-proxy]
distinguished_name = kube-proxy_distinguished_name
prompt = no
req_extensions = kube-proxy_req_extensions

[kube-proxy_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Proxy Certificate"
subjectAltName = DNS:kube-proxy, IP:127.0.0.1
subjectKeyIdentifier = hash

[kube-proxy_distinguished_name]
CN = system:kube-proxy
O = system:node-proxier
C = US
ST = Washington
L = Seattle


# Controller Manager
[kube-controller-manager]
distinguished_name = kube-controller-manager_distinguished_name
prompt = no
req_extensions = kube-controller-manager_req_extensions

[kube-controller-manager_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Controller Manager Certificate"
subjectAltName = DNS:kube-controller-manager, IP:127.0.0.1
subjectKeyIdentifier = hash

[kube-controller-manager_distinguished_name]
CN = system:kube-controller-manager
O = system:kube-controller-manager
C = US
ST = Washington
L = Seattle


# Scheduler
[kube-scheduler]
distinguished_name = kube-scheduler_distinguished_name
prompt = no
req_extensions = kube-scheduler_req_extensions

[kube-scheduler_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Kube Scheduler Certificate"
subjectAltName = DNS:kube-scheduler, IP:127.0.0.1
subjectKeyIdentifier = hash

[kube-scheduler_distinguished_name]
CN = system:kube-scheduler
O = system:kube-scheduler
C = US
ST = Washington
L = Seattle


# API Server
#
# The Kubernetes API server is automatically assigned the `kubernetes`
# internal dns name, which will be linked to the first IP address (`10.32.0.1`)
# from the address range (`10.32.0.0/24`) reserved for internal cluster
# services.

[kube-api-server]
distinguished_name = kube-api-server_distinguished_name
prompt = no
req_extensions = kube-api-server_req_extensions

[kube-api-server_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client, server
nsComment = "Kube API Server Certificate"
subjectAltName = @kube-api-server_alt_names
subjectKeyIdentifier = hash

[kube-api-server_alt_names]
IP.0 = 127.0.0.1
IP.1 = 10.32.0.1
DNS.0 = kubernetes
DNS.1 = kubernetes.default
DNS.2 = kubernetes.default.svc
DNS.3 = kubernetes.default.svc.cluster
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = server.kubernetes.local
DNS.6 = api-server.kubernetes.local

[kube-api-server_distinguished_name]
CN = kubernetes
C = US
ST = Washington
L = Seattle


[default_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth
keyUsage = critical, digitalSignature, keyEncipherment
nsCertType = client
nsComment = "Admin Client Certificate"
subjectKeyIdentifier = hash
```
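Once certificates have been issued from this configuration (as in the certificate authority lab further down), a quick way to confirm that a section's subject and extensions made it into the signed certificate is `openssl x509`. A minimal sketch, assuming `admin.crt` was signed from the `[admin]` section:

```bash
# Inspect the subject, issuer, and the key-usage extension copied from ca.conf.
# admin.crt is assumed to exist; any of the generated *.crt files works the same way.
openssl x509 -in admin.crt -noout -subject -issuer -ext extendedKeyUsage
```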
**configs/10-bridge.conf** (new file): 15 lines
```json
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{"subnet": "SUBNET"}]
    ],
    "routes": [{"dst": "0.0.0.0/0"}]
  }
}
```
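The `SUBNET` placeholder is meant to be replaced with a node's pod subnet before the file is installed on that node. A minimal sketch, assuming `node-0` and the `10.200.0.0/24` `POD_SUBNET` value used in the machine database later in this tutorial:

```bash
# Render the bridge template for one node; the subnet and paths are assumptions.
SUBNET=10.200.0.0/24
sed "s|SUBNET|${SUBNET}|g" configs/10-bridge.conf > 10-bridge.conf
```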
**configs/99-loopback.conf** (new file): 5 lines
```json
{
  "cniVersion": "1.1.0",
  "name": "lo",
  "type": "loopback"
}
```
**configs/containerd-config.toml** (new file): 13 lines
```toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
```
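containerd reads its configuration from `/etc/containerd/config.toml` by default, so installing this file on a worker amounts to a copy plus a service restart. A sketch, assuming containerd is managed by systemd on the node:

```bash
# Place the config where containerd looks for it by default, then restart.
mkdir -p /etc/containerd
cp configs/containerd-config.toml /etc/containerd/config.toml
systemctl restart containerd
```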
**configs/encryption-config.yaml** (new file): 11 lines
```yaml
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
```
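`${ENCRYPTION_KEY}` is a placeholder for a base64-encoded 32-byte key. One way to generate a key and render the template, as a sketch; `envsubst` (from gettext) is an assumption here, not part of the tutorial's tooling:

```bash
# Generate a random 32-byte key, base64-encoded, and substitute it in.
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
envsubst < configs/encryption-config.yaml > encryption-config.yaml
```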
**configs/kube-apiserver-to-kubelet.yaml** (new file): 33 lines
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
```
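This manifest grants the `kubernetes` user (the identity in the API server's client certificate) access to the kubelet API and binds that role cluster-wide. Applying it is a single `kubectl apply` from any host with admin credentials; the kubeconfig path below is an assumption:

```bash
kubectl apply -f configs/kube-apiserver-to-kubelet.yaml \
  --kubeconfig admin.kubeconfig
```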
**configs/kube-proxy-config.yaml** (new file): 6 lines
```yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
```
**configs/kube-scheduler.yaml** (new file): 6 lines
```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
```
**configs/kubelet-config.yaml** (new file): 25 lines
```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: "0.0.0.0"
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubelet/ca.crt"
authorization:
  mode: Webhook
cgroupDriver: systemd
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"
enableServer: true
failSwapOn: false
maxPods: 16
memorySwap:
  swapBehavior: NoSwap
port: 10250
resolvConf: "/etc/resolv.conf"
registerNode: true
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/kubelet.crt"
tlsPrivateKeyFile: "/var/lib/kubelet/kubelet.key"
```
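The kubelet is pointed at a file like this through its `--config` flag. The install path below mirrors the `clientCAFile`/`tlsCertFile` locations above, but is an assumption about the node's systemd unit rather than something shown in this diff:

```bash
# Sketch of how the kubelet would consume this file on a worker node.
kubelet \
  --config=/var/lib/kubelet/kubelet-config.yaml \
  --kubeconfig=/var/lib/kubelet/kubeconfig
```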
**deployments/kube-dns.yaml** (deleted): 206 lines

```yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.32.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
```
**docs/01-prerequisites.md**: changed

````diff
@@ -1,47 +1,33 @@
 # Prerequisites
 
-## Google Cloud Platform
+In this lab you will review the machine requirements necessary to follow this tutorial.
 
-This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
+## Virtual or Physical Machines
 
-[Estimated cost](https://cloud.google.com/products/calculator/#id=78df6ced-9c50-48f8-a670-bc5003f2ddaa) to run this tutorial: $0.22 per hour ($5.39 per day).
+This tutorial requires four (4) virtual or physical ARM64 or AMD64 machines running Debian 12 (bookworm). The following table lists the four machines and their CPU, memory, and storage requirements.
 
-> The compute resources required for this tutorial exceed the Google Cloud Platform free tier.
+| Name    | Description            | CPU | RAM   | Storage |
+|---------|------------------------|-----|-------|---------|
+| jumpbox | Administration host    | 1   | 512MB | 10GB    |
+| server  | Kubernetes server      | 1   | 2GB   | 20GB    |
+| node-0  | Kubernetes worker node | 1   | 2GB   | 20GB    |
+| node-1  | Kubernetes worker node | 1   | 2GB   | 20GB    |
 
-## Google Cloud Platform SDK
-
-### Install the Google Cloud SDK
-
-Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility.
-
-Verify the Google Cloud SDK version is 183.0.0 or higher:
+How you provision the machines is up to you; the only requirement is that each machine meets the above system requirements, including the machine specs and OS version. Once you have all four machines provisioned, verify the OS requirements by viewing the `/etc/os-release` file:
 
-```
-gcloud version
+```bash
+cat /etc/os-release
 ```
 
-### Set a Default Compute Region and Zone
-
-This tutorial assumes a default compute region and zone have been configured.
-
-If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this:
+You should see something similar to the following output:
 
-```
-gcloud init
+```text
+PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
+NAME="Debian GNU/Linux"
+VERSION_ID="12"
+VERSION="12 (bookworm)"
+VERSION_CODENAME=bookworm
+ID=debian
 ```
 
-Otherwise set a default compute region:
-
-```
-gcloud config set compute/region us-west1
-```
-
-Set a default compute zone:
-
-```
-gcloud config set compute/zone us-west1-c
-```
-
-> Use the `gcloud compute zones list` command to view additional regions and zones.
-
-Next: [Installing the Client Tools](02-client-tools.md)
+Next: [Setting up the Jumpbox](02-jumpbox.md)
````
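Beyond the OS check, it can be worth confirming the CPU architecture on each machine, since later labs select download lists by architecture. A small sketch, with expected values as comments:

```bash
uname -m                   # expect x86_64 (AMD64) or aarch64 (ARM64)
dpkg --print-architecture  # expect amd64 or arm64 on Debian 12
```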
**docs/02-client-tools.md** (deleted): 111 lines

````markdown
# Installing the Client Tools

In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).

## Install CFSSL

The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.

Download and install `cfssl` and `cfssljson` from the [cfssl repository](https://pkg.cfssl.org):

### OS X

```
curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
```

```
chmod +x cfssl cfssljson
```

```
sudo mv cfssl cfssljson /usr/local/bin/
```

### Linux

```
wget -q --show-progress --https-only --timestamping \
  https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
  https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
```

```
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64
```

```
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
```

```
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
```

### Verification

Verify `cfssl` version 1.2.0 or higher is installed:

```
cfssl version
```

> output

```
Version: 1.2.0
Revision: dev
Runtime: go1.6
```

> The cfssljson command line utility does not provide a way to print its version.

## Install kubectl

The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:

### OS X

```
curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/darwin/amd64/kubectl
```

```
chmod +x kubectl
```

```
sudo mv kubectl /usr/local/bin/
```

### Linux

```
wget https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl
```

```
chmod +x kubectl
```

```
sudo mv kubectl /usr/local/bin/
```

### Verification

Verify `kubectl` version 1.9.0 or higher is installed:

```
kubectl version --client
```

> output

```
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
```

Next: [Provisioning Compute Resources](03-compute-resources.md)
````
**docs/02-jumpbox.md** (new file): 140 lines
````markdown
# Set Up The Jumpbox

In this lab you will set up one of the four machines to be a `jumpbox`. This machine will be used to run commands throughout this tutorial. While a dedicated machine is being used to ensure consistency, these commands can also be run from just about any machine including your personal workstation running macOS or Linux.

Think of the `jumpbox` as the administration machine that you will use as a home base when setting up your Kubernetes cluster from the ground up. Before we get started we need to install a few command line utilities and clone the Kubernetes The Hard Way git repository, which contains some additional configuration files that will be used to configure various Kubernetes components throughout this tutorial.

Log in to the `jumpbox`:

```bash
ssh root@jumpbox
```

All commands will be run as the `root` user. This is being done for the sake of convenience, and will help reduce the number of commands required to set everything up.

### Install Command Line Utilities

Now that you are logged into the `jumpbox` machine as the `root` user, you will install the command line utilities that will be used to perform various tasks throughout the tutorial.

```bash
{
  apt-get update
  apt-get -y install wget curl vim openssl git
}
```

### Sync GitHub Repository

Now it's time to download a copy of this tutorial, which contains the configuration files and templates that will be used to build your Kubernetes cluster from the ground up. Clone the Kubernetes The Hard Way git repository using the `git` command:

```bash
git clone --depth 1 \
  https://github.com/kelseyhightower/kubernetes-the-hard-way.git
```

Change into the `kubernetes-the-hard-way` directory:

```bash
cd kubernetes-the-hard-way
```

This will be the working directory for the rest of the tutorial. If you ever get lost, run the `pwd` command to verify you are in the right directory when running commands on the `jumpbox`:

```bash
pwd
```

```text
/root/kubernetes-the-hard-way
```

### Download Binaries

In this section you will download the binaries for the various Kubernetes components. The binaries will be stored in the `downloads` directory on the `jumpbox`, which will reduce the amount of internet bandwidth required to complete this tutorial as we avoid downloading the binaries multiple times for each machine in our Kubernetes cluster.

The binaries that will be downloaded are listed in either the `downloads-amd64.txt` or `downloads-arm64.txt` file depending on your hardware architecture, which you can review using the `cat` command:

```bash
cat downloads-$(dpkg --print-architecture).txt
```

Download the binaries into a directory called `downloads` using the `wget` command:

```bash
wget -q --show-progress \
  --https-only \
  --timestamping \
  -P downloads \
  -i downloads-$(dpkg --print-architecture).txt
```

Depending on your internet connection speed it may take a while to download over `500` megabytes of binaries. Once the download is complete, you can list them using the `ls` command:

```bash
ls -oh downloads
```

Extract the component binaries from the release archives and organize them under the `downloads` directory.

```bash
{
  ARCH=$(dpkg --print-architecture)
  mkdir -p downloads/{client,cni-plugins,controller,worker}
  tar -xvf downloads/crictl-v1.32.0-linux-${ARCH}.tar.gz \
    -C downloads/worker/
  tar -xvf downloads/containerd-2.1.0-beta.0-linux-${ARCH}.tar.gz \
    --strip-components 1 \
    -C downloads/worker/
  tar -xvf downloads/cni-plugins-linux-${ARCH}-v1.6.2.tgz \
    -C downloads/cni-plugins/
  tar -xvf downloads/etcd-v3.6.0-rc.3-linux-${ARCH}.tar.gz \
    -C downloads/ \
    --strip-components 1 \
    etcd-v3.6.0-rc.3-linux-${ARCH}/etcdctl \
    etcd-v3.6.0-rc.3-linux-${ARCH}/etcd
  mv downloads/{etcdctl,kubectl} downloads/client/
  mv downloads/{etcd,kube-apiserver,kube-controller-manager,kube-scheduler} \
    downloads/controller/
  mv downloads/{kubelet,kube-proxy} downloads/worker/
  mv downloads/runc.${ARCH} downloads/worker/runc
}
```

```bash
rm -rf downloads/*gz
```

Make the binaries executable.

```bash
{
  chmod +x downloads/{client,cni-plugins,controller,worker}/*
}
```

### Install kubectl

In this section you will install `kubectl`, the official Kubernetes client command line tool, on the `jumpbox` machine. `kubectl` will be used to interact with the Kubernetes control plane once your cluster is provisioned later in this tutorial.

Copy the `kubectl` binary from `downloads/client/` to the `/usr/local/bin/` directory:

```bash
{
  cp downloads/client/kubectl /usr/local/bin/
}
```

At this point `kubectl` is installed and can be verified by running the `kubectl` command:

```bash
kubectl version --client
```

```text
Client Version: v1.32.3
Kustomize Version: v5.5.0
```

At this point the `jumpbox` has been set up with all the command line tools and utilities necessary to complete the labs in this tutorial.

Next: [Provisioning Compute Resources](03-compute-resources.md)
````
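If you want to double-check that the extracted binaries match the jumpbox's architecture before distributing them, the `file` utility works well. A sketch; `file` is an assumption and may first need `apt-get -y install file`:

```bash
file downloads/client/kubectl downloads/worker/kubelet downloads/controller/etcd
```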
**docs/03-compute-resources.md**: changed

````diff
@@ -1,162 +1,224 @@
 # Provisioning Compute Resources
 
-Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [compute zone](https://cloud.google.com/compute/docs/regions-zones/regions-zones).
+Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the machines required for setting up a Kubernetes cluster.
 
-> Ensure a default compute zone and region have been set as described in the [Prerequisites](01-prerequisites.md#set-a-default-compute-region-and-zone) lab.
+## Machine Database
 
-## Networking
+This tutorial will leverage a text file, which will serve as a machine database, to store the various machine attributes that will be used when setting up the Kubernetes control plane and worker nodes. The following schema represents entries in the machine database, one entry per line:
 
-The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints.
-
-> Setting up network policies is out of scope for this tutorial.
-
-### Virtual Private Cloud Network
-
-In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster.
-
-Create the `kubernetes-the-hard-way` custom VPC network:
-
-```
-gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
+```text
+IPV4_ADDRESS FQDN HOSTNAME POD_SUBNET
 ```
 
-A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
+Each of the columns corresponds to a machine IP address `IPV4_ADDRESS`, fully qualified domain name `FQDN`, host name `HOSTNAME`, and the IP subnet `POD_SUBNET`. Kubernetes assigns one IP address per `pod` and the `POD_SUBNET` represents the unique IP address range assigned to each machine in the cluster for doing so.
 
-Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
+Here is an example machine database similar to the one used when creating this tutorial. Notice the IP addresses have been masked out. Your machines can be assigned any IP address as long as each machine is reachable from each other and the `jumpbox`.
 
-```
-gcloud compute networks subnets create kubernetes \
-  --network kubernetes-the-hard-way \
-  --range 10.240.0.0/24
+```bash
+cat machines.txt
 ```
 
-> The `10.240.0.0/24` IP address range can host up to 254 compute instances.
-
-### Firewall Rules
-
-Create a firewall rule that allows internal communication across all protocols:
-
-```
-gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
-  --allow tcp,udp,icmp \
-  --network kubernetes-the-hard-way \
-  --source-ranges 10.240.0.0/24,10.200.0.0/16
+```text
+XXX.XXX.XXX.XXX server.kubernetes.local server
+XXX.XXX.XXX.XXX node-0.kubernetes.local node-0 10.200.0.0/24
+XXX.XXX.XXX.XXX node-1.kubernetes.local node-1 10.200.1.0/24
 ```
 
-Create a firewall rule that allows external SSH, ICMP, and HTTPS:
+Now it's your turn to create a `machines.txt` file with the details for the three machines you will be using to create your Kubernetes cluster. Use the example machine database from above and add the details for your machines.
 
-```
-gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
-  --allow tcp:22,tcp:6443,icmp \
-  --network kubernetes-the-hard-way \
-  --source-ranges 0.0.0.0/0
-```
+## Configuring SSH Access
 
-> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
+SSH will be used to configure the machines in the cluster. Verify that you have `root` SSH access to each machine listed in your machine database. You may need to enable root SSH access on each node by updating the sshd_config file and restarting the SSH server.
 
-List the firewall rules in the `kubernetes-the-hard-way` VPC network:
+### Enable root SSH Access
 
-```
-gcloud compute firewall-rules list --filter "network: kubernetes-the-hard-way"
-```
+If `root` SSH access is enabled for each of your machines you can skip this section.
 
-> output
+By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons as the `root` user has total administrative control of unix-like systems. If a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mentioned earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. Log on to each machine via SSH using your user account, then switch to the `root` user using the `su` command:
 
-```
-NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY
-kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp
-kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp
+```bash
+su - root
 ```
 
-### Kubernetes Public IP Address
+Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the `PermitRootLogin` option to `yes`:
 
-Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
-
-```
-gcloud compute addresses create kubernetes-the-hard-way \
-  --region $(gcloud config get-value compute/region)
+```bash
+sed -i \
+  's/^#*PermitRootLogin.*/PermitRootLogin yes/' \
+  /etc/ssh/sshd_config
 ```
 
-Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
+Restart the `sshd` SSH server to pick up the updated configuration file:
 
-```
-gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
+```bash
+systemctl restart sshd
 ```
 
-> output
+### Generate and Distribute SSH Keys
 
-```
-NAME                     REGION    ADDRESS        STATUS
-kubernetes-the-hard-way  us-west1  XX.XXX.XXX.XX  RESERVED
-```
+In this section you will generate and distribute an SSH keypair to the `server`, `node-0`, and `node-1` machines, which will be used to run commands on those machines throughout this tutorial. Run the following commands from the `jumpbox` machine.
 
-## Compute Instances
+Generate a new SSH key:
 
-The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 16.04, which has good support for the [cri-containerd container runtime](https://github.com/kubernetes-incubator/cri-containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
+```bash
+ssh-keygen
+```
 
-### Kubernetes Controllers
+```text
+Generating public/private rsa key pair.
+Enter file in which to save the key (/root/.ssh/id_rsa):
+Enter passphrase (empty for no passphrase):
+Enter same passphrase again:
+Your identification has been saved in /root/.ssh/id_rsa
+Your public key has been saved in /root/.ssh/id_rsa.pub
+```
 
-Create three compute instances which will host the Kubernetes control plane:
+Copy the SSH public key to each machine:
 
-```
-for i in 0 1 2; do
-  gcloud compute instances create controller-${i} \
-    --async \
-    --boot-disk-size 200GB \
-    --can-ip-forward \
-    --image-family ubuntu-1604-lts \
-    --image-project ubuntu-os-cloud \
-    --machine-type n1-standard-1 \
-    --private-network-ip 10.240.0.1${i} \
-    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
-    --subnet kubernetes \
-    --tags kubernetes-the-hard-way,controller
-done
-```
+```bash
+while read IP FQDN HOST SUBNET; do
+  ssh-copy-id root@${IP}
+done < machines.txt
+```
 
-### Kubernetes Workers
+Once each key is added, verify SSH public key access is working:
 
-Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
+```bash
+while read IP FQDN HOST SUBNET; do
+  ssh -n root@${IP} hostname
+done < machines.txt
+```
 
-> The Kubernetes cluster CIDR range is defined by the Controller Manager's `--cluster-cidr` flag. In this tutorial the cluster CIDR range will be set to `10.200.0.0/16`, which supports 254 subnets.
+```text
+server
+node-0
+node-1
+```
 
-Create three compute instances which will host the Kubernetes worker nodes:
+## Hostnames
 
-```
-for i in 0 1 2; do
-  gcloud compute instances create worker-${i} \
-    --async \
-    --boot-disk-size 200GB \
-    --can-ip-forward \
-    --image-family ubuntu-1604-lts \
-    --image-project ubuntu-os-cloud \
-    --machine-type n1-standard-1 \
-    --metadata pod-cidr=10.200.${i}.0/24 \
-    --private-network-ip 10.240.0.2${i} \
-    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
-    --subnet kubernetes \
-    --tags kubernetes-the-hard-way,worker
-done
-```
+In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. The hostname also plays a major role within the cluster. Instead of Kubernetes clients using an IP address to issue commands to the Kubernetes API server, those clients will use the `server` hostname instead. Hostnames are also used by each worker machine, `node-0` and `node-1`, when registering with a given Kubernetes cluster.
 
-### Verification
+To configure the hostname for each machine, run the following commands on the `jumpbox`.
 
-List the compute instances in your default compute zone:
+Set the hostname on each machine listed in the `machines.txt` file:
 
-```
-gcloud compute instances list
-```
+```bash
+while read IP FQDN HOST SUBNET; do
+  CMD="sed -i 's/^127.0.1.1.*/127.0.1.1\t${FQDN} ${HOST}/' /etc/hosts"
+  ssh -n root@${IP} "$CMD"
+  ssh -n root@${IP} hostnamectl set-hostname ${HOST}
+  ssh -n root@${IP} systemctl restart systemd-hostnamed
+done < machines.txt
+```
 
-> output
+Verify the hostname is set on each machine:
 
-```
-NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
-controller-0  us-west1-c  n1-standard-1               10.240.0.10  XX.XXX.XXX.XXX  RUNNING
-controller-1  us-west1-c  n1-standard-1               10.240.0.11  XX.XXX.X.XX     RUNNING
-controller-2  us-west1-c  n1-standard-1               10.240.0.12  XX.XXX.XXX.XX   RUNNING
-worker-0      us-west1-c  n1-standard-1               10.240.0.20  XXX.XXX.XXX.XX  RUNNING
-worker-1      us-west1-c  n1-standard-1               10.240.0.21  XX.XXX.XX.XXX   RUNNING
-worker-2      us-west1-c  n1-standard-1               10.240.0.22  XXX.XXX.XX.XX   RUNNING
-```
+```bash
+while read IP FQDN HOST SUBNET; do
+  ssh -n root@${IP} hostname --fqdn
+done < machines.txt
+```
+
+```text
+server.kubernetes.local
+node-0.kubernetes.local
+node-1.kubernetes.local
+```
+
+## Host Lookup Table
+
+In this section you will generate a `hosts` file which will be appended to the `/etc/hosts` file on the `jumpbox` and to the `/etc/hosts` files on all three cluster members used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.
+
+Create a new `hosts` file and add a header to identify the machines being added:
+
+```bash
+echo "" > hosts
+echo "# Kubernetes The Hard Way" >> hosts
+```
+
+Generate a host entry for each machine in the `machines.txt` file and append it to the `hosts` file:
+
+```bash
+while read IP FQDN HOST SUBNET; do
+  ENTRY="${IP} ${FQDN} ${HOST}"
+  echo $ENTRY >> hosts
+done < machines.txt
+```
+
+Review the host entries in the `hosts` file:
+
+```bash
+cat hosts
+```
+
+```text
+
+# Kubernetes The Hard Way
+XXX.XXX.XXX.XXX server.kubernetes.local server
+XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
+XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
+```
+
+## Adding `/etc/hosts` Entries To A Local Machine
+
+In this section you will append the DNS entries from the `hosts` file to the local `/etc/hosts` file on your `jumpbox` machine.
+
+Append the DNS entries from `hosts` to `/etc/hosts`:
+
+```bash
+cat hosts >> /etc/hosts
+```
+
+Verify that the `/etc/hosts` file has been updated:
+
+```bash
+cat /etc/hosts
+```
+
+```text
+127.0.0.1       localhost
+127.0.1.1       jumpbox
+
+# The following lines are desirable for IPv6 capable hosts
+::1     localhost ip6-localhost ip6-loopback
+ff02::1 ip6-allnodes
+ff02::2 ip6-allrouters
+
+# Kubernetes The Hard Way
+XXX.XXX.XXX.XXX server.kubernetes.local server
+XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
+XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
+```
+
+At this point you should be able to SSH to each machine listed in the `machines.txt` file using a hostname.
+
+```bash
+for host in server node-0 node-1
+  do ssh root@${host} hostname
+done
+```
+
+```text
+server
+node-0
+node-1
+```
+
+## Adding `/etc/hosts` Entries To The Remote Machines
+
+In this section you will append the host entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.
+
+Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:
+
+```bash
+while read IP FQDN HOST SUBNET; do
+  scp hosts root@${HOST}:~/
+  ssh -n \
+    root@${HOST} "cat hosts >> /etc/hosts"
+done < machines.txt
+```
+
+At this point, hostnames can be used when connecting to machines from your `jumpbox` machine, or any of the three machines in the Kubernetes cluster. Instead of using IP addresses you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.
 
 Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)
````
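The machine database only requires that every machine can reach every other machine. A quick sketch that exercises the full mesh over the hostnames configured above:

```bash
# Each machine pings every other machine by hostname (one probe, 2s timeout).
for src in server node-0 node-1; do
  for dst in server node-0 node-1; do
    ssh -n root@${src} ping -c 1 -W 2 ${dst}
  done
done
```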
@@ -1,283 +1,108 @@
|
||||
# Provisioning a CA and Generating TLS Certificates
|
||||
|
||||
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kubelet, and kube-proxy.
|
||||
In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using openssl to bootstrap a Certificate Authority, and generate TLS certificates for the following components: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy. The commands in this section should be run from the `jumpbox`.
|
||||
|
||||
## Certificate Authority
|
||||
|
||||
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
|
||||
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates for the other Kubernetes components. Setting up CA and generating certificates using `openssl` can be time-consuming, especially when doing it for the first time. To streamline this lab, I've included an openssl configuration file `ca.conf`, which defines all the details needed to generate certificates for each Kubernetes component.
|
||||
|
||||
Create the CA configuration file:
|
||||
Take a moment to review the `ca.conf` configuration file:
|
||||
|
||||
```bash
|
||||
cat ca.conf
|
||||
```
|
||||
cat > ca-config.json <<EOF
|
||||
|
||||
You don't need to understand everything in the `ca.conf` file to complete this tutorial, but you should consider it a starting point for learning `openssl` and the configuration that goes into managing certificates at a high level.
|
||||
|
||||
Every certificate authority starts with a private key and root certificate. In this section we are going to create a self-signed certificate authority, and while that's all we need for this tutorial, this shouldn't be considered something you would do in a real-world production environment.
|
||||
|
||||
Generate the CA configuration file, certificate, and private key:
|
||||
|
||||
```bash
|
||||
{
|
||||
"signing": {
|
||||
"default": {
|
||||
"expiry": "8760h"
|
||||
},
|
||||
"profiles": {
|
||||
"kubernetes": {
|
||||
"usages": ["signing", "key encipherment", "server auth", "client auth"],
|
||||
"expiry": "8760h"
|
||||
}
|
||||
}
|
||||
}
|
||||
openssl genrsa -out ca.key 4096
|
||||
openssl req -x509 -new -sha512 -noenc \
|
||||
-key ca.key -days 3653 \
|
||||
-config ca.conf \
|
||||
-out ca.crt
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
Create the CA certificate signing request:
|
||||
|
||||
```
|
||||
cat > ca-csr.json <<EOF
|
||||
{
|
||||
"CN": "Kubernetes",
|
||||
"key": {
|
||||
"algo": "rsa",
|
||||
"size": 2048
|
||||
},
|
||||
"names": [
|
||||
{
|
||||
"C": "US",
|
||||
"L": "Portland",
|
||||
"O": "Kubernetes",
|
||||
"OU": "CA",
|
||||
"ST": "Oregon"
|
||||
}
|
||||
]
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
Generate the CA certificate and private key:
|
||||
|
||||
```
|
||||
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
|
||||
```
|
||||
|
||||
Results:
|
||||
|
||||
```
|
||||
ca-key.pem
|
||||
ca.pem
|
||||
```txt
|
||||
ca.crt ca.key
|
||||
```
|
||||
|
||||

## Create Client and Server Certificates

In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.

Generate the certificates and private keys:

```bash
certs=(
  "admin" "node-0" "node-1"
  "kube-proxy" "kube-scheduler"
  "kube-controller-manager"
  "kube-api-server"
  "service-accounts"
)
```

```bash
for i in ${certs[*]}; do
  openssl genrsa -out "${i}.key" 4096

  openssl req -new -key "${i}.key" -sha256 \
    -config "ca.conf" -section ${i} \
    -out "${i}.csr"

  openssl x509 -req -days 3653 -in "${i}.csr" \
    -copy_extensions copyall \
    -sha256 -CA "ca.crt" \
    -CAkey "ca.key" \
    -CAcreateserial \
    -out "${i}.crt"
done
```

The results of running the above command will generate a private key, certificate request, and signed SSL certificate for each of the Kubernetes components. You can list the generated files with the following command:

```bash
ls -1 *.crt *.key *.csr
```
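
As a quick sanity check, you can verify that a signed certificate chains back to the CA; for example, for the `admin` client certificate:

```bash
openssl verify -CAfile ca.crt admin.crt
```

An output of `admin.crt: OK` confirms the certificate was signed by the CA generated above.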

## Distribute the Client and Server Certificates

In this section you will copy the various certificates to every machine at a path where each Kubernetes component will search for its certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets as they are used as credentials by the Kubernetes components to authenticate to each other.

Copy the appropriate certificates and private keys to the `node-0` and `node-1` machines:

```bash
for host in node-0 node-1; do
  ssh root@${host} mkdir /var/lib/kubelet/

  scp ca.crt root@${host}:/var/lib/kubelet/

  scp ${host}.crt \
    root@${host}:/var/lib/kubelet/kubelet.crt

  scp ${host}.key \
    root@${host}:/var/lib/kubelet/kubelet.key
done
```

Copy the appropriate certificates and private keys to the `server` machine:

```bash
scp \
  ca.key ca.crt \
  kube-api-server.key kube-api-server.crt \
  service-accounts.key service-accounts.crt \
  root@server:~/
```

> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.

Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)

@@ -1,101 +1,210 @@

# Generating Kubernetes Configuration Files for Authentication

In this lab you will generate [Kubernetes client configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), typically called kubeconfigs, which configure Kubernetes clients to connect and authenticate to Kubernetes API Servers.

## Client Authentication Configs

In this section you will generate kubeconfig files for the `kubelet` and the `admin` user.

### The kubelet Kubernetes Configuration File

When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/reference/access-authn-authz/node/).

> The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.

Generate a kubeconfig file for the `node-0` and `node-1` worker nodes:

```bash
for host in node-0 node-1; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=${host}.kubeconfig

  kubectl config set-credentials system:node:${host} \
    --client-certificate=${host}.crt \
    --client-key=${host}.key \
    --embed-certs=true \
    --kubeconfig=${host}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${host} \
    --kubeconfig=${host}.kubeconfig

  kubectl config use-context default \
    --kubeconfig=${host}.kubeconfig
done
```

Results:

```text
node-0.kubeconfig
node-1.kubeconfig
```
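
If you want to confirm what was written, you can render one of the kubeconfigs; `kubectl config view` prints the file with the embedded certificate data redacted. This check is optional:

```bash
kubectl config view --kubeconfig node-0.kubeconfig
```

The output should show the `https://server.kubernetes.local:6443` endpoint and the `system:node:node-0` user.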

### The kube-proxy Kubernetes Configuration File

Generate a kubeconfig file for the `kube-proxy` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.crt \
    --client-key=kube-proxy.key \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-proxy.kubeconfig
}
```

Results:

```text
kube-proxy.kubeconfig
```

### The kube-controller-manager Kubernetes Configuration File

Generate a kubeconfig file for the `kube-controller-manager` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.crt \
    --client-key=kube-controller-manager.key \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-controller-manager.kubeconfig
}
```

Results:

```text
kube-controller-manager.kubeconfig
```

### The kube-scheduler Kubernetes Configuration File

Generate a kubeconfig file for the `kube-scheduler` service:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.crt \
    --client-key=kube-scheduler.key \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-scheduler.kubeconfig
}
```

Results:

```text
kube-scheduler.kubeconfig
```

### The admin Kubernetes Configuration File

Generate a kubeconfig file for the `admin` user:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.crt \
    --client-key=admin.key \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default \
    --kubeconfig=admin.kubeconfig
}
```

Results:

```text
admin.kubeconfig
```

## Distribute the Kubernetes Configuration Files

Copy the `kubelet` and `kube-proxy` kubeconfig files to the `node-0` and `node-1` machines:

```bash
for host in node-0 node-1; do
  ssh root@${host} "mkdir -p /var/lib/{kube-proxy,kubelet}"

  scp kube-proxy.kubeconfig \
    root@${host}:/var/lib/kube-proxy/kubeconfig

  scp ${host}.kubeconfig \
    root@${host}:/var/lib/kubelet/kubeconfig
done
```

Copy the `admin`, `kube-controller-manager`, and `kube-scheduler` kubeconfig files to the `server` machine:

```bash
scp admin.kubeconfig \
  kube-controller-manager.kubeconfig \
  kube-scheduler.kubeconfig \
  root@server:~/
```
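
To confirm the kubeconfigs landed where the components will look for them, you can list the target directories. This check is optional:

```bash
ssh root@node-0 "ls /var/lib/kubelet /var/lib/kube-proxy"
```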

Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)

@@ -8,36 +8,23 @@ In this lab you will generate an encryption key and an [encryption config](https

Generate an encryption key:

```bash
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
```
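
The `aescbc` provider used by this tutorial expects a 32-byte key, so you can optionally confirm the generated value decodes to that length:

```bash
echo ${ENCRYPTION_KEY} | base64 -d | wc -c
```

The command should print `32`.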

## The Encryption Config File

Create the `encryption-config.yaml` encryption config file. The `envsubst` command substitutes the `ENCRYPTION_KEY` environment variable exported above into the config template:

```bash
envsubst < configs/encryption-config.yaml \
  > encryption-config.yaml
```

Copy the `encryption-config.yaml` encryption config file to the `server` machine:

```bash
scp encryption-config.yaml root@server:~/
```

Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)

@@ -1,128 +1,76 @@

# Bootstrapping the etcd Cluster

Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a single node etcd cluster.

## Prerequisites

Copy `etcd` binaries and systemd unit files to the `server` machine:

```bash
scp \
  downloads/controller/etcd \
  downloads/client/etcdctl \
  units/etcd.service \
  root@server:~/
```

The commands in this lab must be run on the `server` machine. Login to the `server` machine using the `ssh` command. Example:

```bash
ssh root@server
```

## Bootstrapping an etcd Cluster

### Install the etcd Binaries

Install the `etcd` server and the `etcdctl` command line utility:

```bash
{
  mv etcd etcdctl /usr/local/bin/
}
```

### Configure the etcd Server

```bash
{
  mkdir -p /etc/etcd /var/lib/etcd
  chmod 700 /var/lib/etcd
  cp ca.crt kube-api-server.key kube-api-server.crt \
    /etc/etcd/
}
```

Create the `etcd.service` systemd unit file:

```bash
mv etcd.service /etc/systemd/system/
```

### Start the etcd Server

```bash
{
  systemctl daemon-reload
  systemctl enable etcd
  systemctl start etcd
}
```

## Verification

List the etcd cluster members:

```bash
etcdctl member list
```

> output

```text
6702b0a34e2cfd39, started, controller, http://127.0.0.1:2380, http://127.0.0.1:2379, false
```
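
Optionally, you can also confirm the member is healthy and serving requests; `etcdctl` talks to `http://127.0.0.1:2379` by default, which matches the client URL above:

```bash
etcdctl endpoint health
```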

Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)

@@ -1,314 +1,195 @@

# Bootstrapping the Kubernetes Control Plane

In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the `server` machine: Kubernetes API Server, Scheduler, and Controller Manager.

## Prerequisites

Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files to the `server` machine:

```bash
scp \
  downloads/controller/kube-apiserver \
  downloads/controller/kube-controller-manager \
  downloads/controller/kube-scheduler \
  downloads/client/kubectl \
  units/kube-apiserver.service \
  units/kube-controller-manager.service \
  units/kube-scheduler.service \
  configs/kube-scheduler.yaml \
  configs/kube-apiserver-to-kubelet.yaml \
  root@server:~/
```

The commands in this lab must be run on the `server` machine. Login to the `server` machine using the `ssh` command. Example:

```bash
ssh root@server
```

## Provision the Kubernetes Control Plane

Create the Kubernetes configuration directory:

```bash
mkdir -p /etc/kubernetes/config
```

### Install the Kubernetes Controller Binaries

Install the Kubernetes binaries:

```bash
{
  mv kube-apiserver \
    kube-controller-manager \
    kube-scheduler kubectl \
    /usr/local/bin/
}
```

### Configure the Kubernetes API Server

```bash
{
  mkdir -p /var/lib/kubernetes/

  mv ca.crt ca.key \
    kube-api-server.key kube-api-server.crt \
    service-accounts.key service-accounts.crt \
    encryption-config.yaml \
    /var/lib/kubernetes/
}
```

Create the `kube-apiserver.service` systemd unit file:

```bash
mv kube-apiserver.service \
  /etc/systemd/system/kube-apiserver.service
```

### Configure the Kubernetes Controller Manager

Move the `kube-controller-manager` kubeconfig into place:

```bash
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
```

Create the `kube-controller-manager.service` systemd unit file:

```bash
mv kube-controller-manager.service /etc/systemd/system/
```

### Configure the Kubernetes Scheduler

Move the `kube-scheduler` kubeconfig into place:

```bash
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```

Create the `kube-scheduler.yaml` configuration file:

```bash
mv kube-scheduler.yaml /etc/kubernetes/config/
```

Create the `kube-scheduler.service` systemd unit file:

```bash
mv kube-scheduler.service /etc/systemd/system/
```

### Start the Controller Services

```bash
{
  systemctl daemon-reload

  systemctl enable kube-apiserver \
    kube-controller-manager kube-scheduler

  systemctl start kube-apiserver \
    kube-controller-manager kube-scheduler
}
```

> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.

You can check if any of the control plane components are active using the `systemctl` command. For example, to check if the `kube-apiserver` is fully initialized and active, run the following command:

```bash
systemctl is-active kube-apiserver
```

For a more detailed status check, which includes additional process information and log messages, use the `systemctl status` command:

```bash
systemctl status kube-apiserver
```

If you run into any errors, or want to view the logs for any of the control plane components, use the `journalctl` command. For example, to view the logs for the `kube-apiserver`, run the following command:

```bash
journalctl -u kube-apiserver
```

### Verification

At this point the Kubernetes control plane components should be up and running. Verify this using the `kubectl` command line tool:

```bash
kubectl cluster-info \
  --kubeconfig admin.kubeconfig
```

```text
Kubernetes control plane is running at https://127.0.0.1:6443
```
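
As an additional check, you can query the API server's aggregated readiness endpoint, which reports the status of each internal health check. This step is optional:

```bash
kubectl get --raw='/readyz?verbose' \
  --kubeconfig admin.kubeconfig
```

The final line of the output should read `readyz check passed`.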

## RBAC for Kubelet Authorization

In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.

> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access) API to determine authorization.

The commands in this section will affect the entire cluster and only need to be run on the `server` machine.

```bash
ssh root@server
```

Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods, and bind it to the `kubernetes` user. The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag:

```bash
kubectl apply -f kube-apiserver-to-kubelet.yaml \
  --kubeconfig admin.kubeconfig
```
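
You can confirm the ClusterRole was created:

```bash
kubectl get clusterrole system:kube-apiserver-to-kubelet \
  --kubeconfig admin.kubeconfig
```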

### Verification

At this point the Kubernetes control plane is up and running. Run the following commands from the `jumpbox` machine to verify it's working.

Make an HTTP request for the Kubernetes version info:

```bash
curl --cacert ca.crt \
  https://server.kubernetes.local:6443/version
```

> output

```text
{
  "major": "1",
  "minor": "32",
  "gitVersion": "v1.32.3",
  "gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
  "gitTreeState": "clean",
  "buildDate": "2025-03-11T19:52:21Z",
  "goVersion": "go1.23.6",
  "compiler": "gc",
  "platform": "linux/arm64"
}
```

@@ -1,40 +1,91 @@

# Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).

## Prerequisites

The commands in this section must be run from the `jumpbox`.

Copy the Kubernetes binaries and systemd unit files to each worker instance:

```bash
for HOST in node-0 node-1; do
  SUBNET=$(grep ${HOST} machines.txt | cut -d " " -f 4)
  sed "s|SUBNET|$SUBNET|g" \
    configs/10-bridge.conf > 10-bridge.conf

  sed "s|SUBNET|$SUBNET|g" \
    configs/kubelet-config.yaml > kubelet-config.yaml

  scp 10-bridge.conf kubelet-config.yaml \
    root@${HOST}:~/
done
```

```bash
for HOST in node-0 node-1; do
  scp \
    downloads/worker/* \
    downloads/client/kubectl \
    configs/99-loopback.conf \
    configs/containerd-config.toml \
    configs/kube-proxy-config.yaml \
    units/containerd.service \
    units/kubelet.service \
    units/kube-proxy.service \
    root@${HOST}:~/
done
```

```bash
for HOST in node-0 node-1; do
  scp \
    downloads/cni-plugins/* \
    root@${HOST}:~/cni-plugins/
done
```

The commands in the next section must be run on each worker instance: `node-0` and `node-1`. Login to each worker instance using the `ssh` command. Example:

```bash
ssh root@node-0
```

## Provisioning a Kubernetes Worker Node

Install the OS dependencies:

```bash
{
  apt-get update
  apt-get -y install socat conntrack ipset kmod
}
```

> The socat binary enables support for the `kubectl port-forward` command.

### Disable Swap

Kubernetes has limited support for the use of swap memory, as it is difficult to provide guarantees and account for pod memory utilization when swap is involved.

Verify if swap is disabled:

```bash
swapon --show
```

If the output is empty then swap is disabled. If swap is enabled, run the following command to disable swap immediately:

```bash
swapoff -a
```

> To ensure swap remains off after reboot consult your Linux distro documentation.
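
On the Debian-based machines used in this tutorial, one common way to make this persistent is to comment out any swap entries in `/etc/fstab`. This is a sketch, assuming swap is configured through `fstab` rather than a systemd swap unit:

```bash
# Comment out every fstab line whose type field is "swap"
sed -i '/ swap / s/^/#/' /etc/fstab
```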

Create the installation directories:

```bash
mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
```

Install the worker binaries:

```bash
{
  mv crictl kube-proxy kubelet runc \
    /usr/local/bin/
  mv containerd containerd-shim-runc-v2 containerd-stress /bin/
  mv cni-plugins/* /opt/cni/bin/
}
```

### Configure CNI Networking

Move the `bridge` and `loopback` network configuration files into place:

```bash
mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
```

To ensure network traffic crossing the CNI `bridge` network is processed by `iptables`, load and configure the `br-netfilter` kernel module:

```bash
{
  modprobe br-netfilter
  echo "br-netfilter" >> /etc/modules-load.d/modules.conf
}
```

```bash
{
  echo "net.bridge.bridge-nf-call-iptables = 1" \
    >> /etc/sysctl.d/kubernetes.conf
  echo "net.bridge.bridge-nf-call-ip6tables = 1" \
    >> /etc/sysctl.d/kubernetes.conf
  sysctl -p /etc/sysctl.d/kubernetes.conf
}
```
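
You can confirm the module is loaded and the sysctls took effect before continuing. This check is optional:

```bash
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
```

The sysctl should report `net.bridge.bridge-nf-call-iptables = 1`.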

### Configure containerd

Install the `containerd` configuration files:

```bash
{
  mkdir -p /etc/containerd/
  mv containerd-config.toml /etc/containerd/config.toml
  mv containerd.service /etc/systemd/system/
}
```

### Configure the Kubelet

Install the `kubelet` configuration files:

```bash
{
  mv kubelet-config.yaml /var/lib/kubelet/
  mv kubelet.service /etc/systemd/system/
}
```

### Configure the Kubernetes Proxy

```bash
{
  mv kube-proxy-config.yaml /var/lib/kube-proxy/
  mv kube-proxy.service /etc/systemd/system/
}
```

### Start the Worker Services

```bash
{
  systemctl daemon-reload
  systemctl enable containerd kubelet kube-proxy
  systemctl start containerd kubelet kube-proxy
}
```

Check if the kubelet service is running:

```bash
systemctl is-active kubelet
```

```text
active
```
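
If the service reports anything other than `active`, the kubelet logs are the first place to look:

```bash
journalctl -u kubelet --no-pager --lines 50
```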

Be sure to complete the steps in this section on each worker node, `node-0` and `node-1`, before moving on to the next section.

## Verification

Run the following commands from the `jumpbox` machine.

List the registered Kubernetes nodes:

```bash
ssh root@server \
  "kubectl get nodes \
    --kubeconfig admin.kubeconfig"
```

> output

```
NAME     STATUS   ROLES    AGE   VERSION
node-0   Ready    <none>   1m    v1.32.3
node-1   Ready    <none>   10s   v1.32.3
```

Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

@@ -2,77 +2,80 @@

In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.

> Run the commands in this lab from the `jumpbox` machine.

## The Admin Kubernetes Configuration File

Each kubeconfig requires a Kubernetes API Server to connect to.

You should be able to reach `server.kubernetes.local` based on the `/etc/hosts` DNS entry from a previous lab:

```bash
curl --cacert ca.crt \
  https://server.kubernetes.local:6443/version
```

```text
{
  "major": "1",
  "minor": "32",
  "gitVersion": "v1.32.3",
  "gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
  "gitTreeState": "clean",
  "buildDate": "2025-03-11T19:52:21Z",
  "goVersion": "go1.23.6",
  "compiler": "gc",
  "platform": "linux/arm64"
}
```

Generate a kubeconfig file suitable for authenticating as the `admin` user:

```bash
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://server.kubernetes.local:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.crt \
    --client-key=admin.key

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}
```

Running the commands above creates a kubeconfig file in the default location, `~/.kube/config`, used by the `kubectl` command line tool. This also means you can run the `kubectl` command without specifying a config.
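
You can confirm which context is active at any time:

```bash
kubectl config current-context
```

> output

```text
kubernetes-the-hard-way
```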
|
||||
|
||||
|
||||
## Verification

Check the version of the remote Kubernetes cluster:

```bash
kubectl version
```

> output

```text
Client Version: v1.32.3
Kustomize Version: v5.5.0
Server Version: v1.32.3
```

List the nodes in the remote Kubernetes cluster:

```bash
kubectl get nodes
```

> output

```text
NAME     STATUS   ROLES    AGE   VERSION
node-0   Ready    <none>   10m   v1.32.3
node-1   Ready    <none>   10m   v1.32.3
```

Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

@@ -12,49 +12,67 @@ In this section you will gather the information required to create routes in the

Extract the internal IP address and Pod CIDR range (subnet) of each machine from `machines.txt`:

```bash
{
  SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
  NODE_0_IP=$(grep node-0 machines.txt | cut -d " " -f 1)
  NODE_0_SUBNET=$(grep node-0 machines.txt | cut -d " " -f 4)
  NODE_1_IP=$(grep node-1 machines.txt | cut -d " " -f 1)
  NODE_1_SUBNET=$(grep node-1 machines.txt | cut -d " " -f 4)
}
```
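
These `grep`/`cut` pipelines assume the whitespace-separated `machines.txt` layout used throughout this tutorial: IP address, FQDN, host name, and, for the nodes, the Pod subnet in the fourth column. A hypothetical example (your addresses will differ):

```text
192.168.8.10 server.kubernetes.local server
192.168.8.20 node-0.kubernetes.local node-0 10.200.0.0/24
192.168.8.30 node-1.kubernetes.local node-1 10.200.1.0/24
```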

## Routes

Create routes so that Pod traffic destined for a node's Pod subnet is forwarded to that node's internal IP address, starting with the `server` machine:

```bash
ssh root@server <<EOF
  ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
  ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
```

```bash
ssh root@node-0 <<EOF
  ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
EOF
```

```bash
ssh root@node-1 <<EOF
  ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
EOF
```
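
Keep in mind that routes added with `ip route add` take effect immediately but do not persist across reboots, so re-run the commands above if a machine restarts. To confirm the kernel will use a new route for Pod traffic, `ip route get` shows the next hop chosen for a given address (10.200.0.1 below is just an illustrative address inside node-0's subnet):

```bash
ssh root@server ip route get 10.200.0.1
```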

## Verification

```bash
ssh root@server ip route
```

```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.0.0/24 via XXX.XXX.XXX.XXX dev ens160
10.200.1.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```

```bash
ssh root@node-0 ip route
```

```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.1.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```

```bash
ssh root@node-1 ip route
```

```text
default via XXX.XXX.XXX.XXX dev ens160
10.200.0.0/24 via XXX.XXX.XXX.XXX dev ens160
XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
```

Next: [Smoke Test](12-smoke-test.md)

@@ -1,79 +0,0 @@

# Deploying the DNS Cluster Add-on

In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery to applications running inside the Kubernetes cluster.

## The DNS Cluster Add-on

Deploy the `kube-dns` cluster add-on:

```
kubectl create -f https://storage.googleapis.com/kubernetes-the-hard-way/kube-dns.yaml
```

> output

```
serviceaccount "kube-dns" created
configmap "kube-dns" created
service "kube-dns" created
deployment "kube-dns" created
```

List the pods created by the `kube-dns` deployment:

```
kubectl get pods -l k8s-app=kube-dns -n kube-system
```

> output

```
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-3097350089-gq015   3/3       Running   0          20s
kube-dns-3097350089-q64qc   3/3       Running   0          20s
```

## Verification

Create a `busybox` deployment:

```
kubectl run busybox --image=busybox --command -- sleep 3600
```

List the pod created by the `busybox` deployment:

```
kubectl get pods -l run=busybox
```

> output

```
NAME                       READY     STATUS    RESTARTS   AGE
busybox-2125412808-mt2vb   1/1       Running   0          15s
```

Retrieve the full name of the `busybox` pod:

```
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
```

Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:

```
kubectl exec -ti $POD_NAME -- nslookup kubernetes
```

> output

```
Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
```

Next: [Smoke Test](13-smoke-test.md)

@@ -8,37 +8,42 @@ In this section you will verify the ability to [encrypt secret data at rest](htt

Create a generic secret:

```bash
kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
```

Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:

```bash
ssh root@server \
  'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
```

> output

```text
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 4f 1b 80 d8 89 72 f4 |:v1:key1:O....r.|
00000050 60 8a 2c a0 76 1a e1 dc 98 d6 00 7a a4 2f f3 92 |`.,.v......z./..|
00000060 87 63 c9 22 f4 58 c8 27 b9 ff 2c 2e 1a b6 55 be |.c.".X.'..,...U.|
00000070 d5 5c 4d 69 82 2f b7 e4 b3 b0 12 e1 58 c4 9c 77 |.\Mi./......X..w|
00000080 78 0c 1a 90 c9 c1 23 6c 73 8e 6e fd 8e 9c 3d 84 |x.....#ls.n...=.|
00000090 7d bf 69 81 ce c9 aa 38 be 3b dd 66 aa a3 33 27 |}.i....8.;.f..3'|
000000a0 df be 6d ac 1c 6d 8a 82 df b3 19 da 0f 93 94 1e |..m..m..........|
000000b0 e0 7d 46 8d b5 14 d0 c5 97 e2 94 76 26 a8 cb 33 |.}F........v&..3|
000000c0 57 2a d0 27 a6 5a e1 76 a7 3f f0 b7 0a 7b ff 53 |W*.'.Z.v.?...{.S|
000000d0 cf c9 1a 18 5b 45 f8 b1 06 3b a9 45 02 76 23 61 |....[E...;.E.v#a|
000000e0 5e dc 86 cf 8e a4 d3 c9 5c 6a 6f e6 33 7b 5b 8f |^.......\jo.3{[.|
000000f0 fb 8a 14 74 58 f9 49 2f 97 98 cc 5c d4 4a 10 1a |...tX.I/...\.J..|
00000100 64 0a 79 21 68 a0 9e 7a 03 b7 19 e6 20 e4 1b ce |d.y!h..z.... ...|
00000110 91 64 ce 90 d9 4f 86 ca fb 45 2f d6 56 93 68 e1 |.d...O...E/.V.h.|
00000120 0b aa 8c a0 20 a6 97 fa a1 de 07 6d 5b 4c 02 96 |.... ......m[L..|
00000130 31 70 20 83 16 f9 0a 22 5c 63 ad f1 ea 41 a7 1e |1p ...."\c...A..|
00000140 29 1a d4 a4 e9 d7 0c 04 74 66 04 6d 73 d8 2e 3f |).......tf.ms..?|
00000150 f0 b9 2f 77 bd 07 d7 7c 42 0a |../w...|B.|
0000015a
```

The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
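
If you only want to verify the prefix without reading the whole dump, the same command can be piped through `grep`:

```bash
ssh root@server \
  'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C | grep aescbc'
```
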
@@ -49,21 +54,20 @@ In this section you will verify the ability to create and manage [Deployments](h

Create a deployment for the [nginx](https://nginx.org/en/) web server:

```bash
kubectl create deployment nginx \
  --image=nginx:latest
```

List the pod created by the `nginx` deployment:

```bash
kubectl get pods -l app=nginx
```

> output

```text
NAME                     READY   STATUS    RESTARTS   AGE
nginx-56fcf95486-c8dnx   1/1     Running   0          8s
```
### Port Forwarding
@@ -72,46 +76,43 @@ In this section you will verify the ability to access applications remotely usin

Retrieve the full name of the `nginx` pod:

```bash
POD_NAME=$(kubectl get pods -l app=nginx \
  -o jsonpath="{.items[0].metadata.name}")
```

Forward port `8080` on your local machine to port `80` of the `nginx` pod:

```bash
kubectl port-forward $POD_NAME 8080:80
```

> output

```text
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
```

In a new terminal, make an HTTP request using the forwarding address:

```bash
curl --head http://127.0.0.1:8080
```

> output

```text
HTTP/1.1 200 OK
Server: nginx/1.27.4
Date: Sun, 06 Apr 2025 17:17:12 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 05 Feb 2025 11:06:32 GMT
Connection: keep-alive
ETag: "67a34638-267"
Accept-Ranges: bytes
```

Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:

```text
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
^C
```

@@ -124,14 +125,13 @@ In this section you will verify the ability to [retrieve container logs](https:/

Print the `nginx` pod logs:

```bash
kubectl logs $POD_NAME
```

> output

```text
...
127.0.0.1 - - [06/Apr/2025:17:17:12 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.88.1" "-"
```
### Exec
@@ -140,14 +140,12 @@ In this section you will verify the ability to [execute commands in a container]

Print the nginx version by executing the `nginx -v` command in the `nginx` container:

```bash
kubectl exec -ti $POD_NAME -- nginx -v
```

> output

```text
nginx version: nginx/1.27.4
```
## Services
@@ -156,52 +154,43 @@ In this section you will verify the ability to expose applications using a [Serv

Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:

```bash
kubectl expose deployment nginx \
  --port 80 --type NodePort
```

> The LoadBalancer service type cannot be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.

Retrieve the node port assigned to the `nginx` service:

```bash
NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
```

Retrieve the hostname of the node running the `nginx` pod:

```bash
NODE_NAME=$(kubectl get pods \
  -l app=nginx \
  -o jsonpath="{.items[0].spec.nodeName}")
```
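
This works on the assumption that the node hostname resolves from the machine where you run `curl`, for example via the `/etc/hosts` entries created on the `jumpbox` in an earlier lab. A quick check:

```bash
getent hosts ${NODE_NAME}
```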

Make an HTTP request using the node hostname and the `nginx` node port:

```bash
curl -I http://${NODE_NAME}:${NODE_PORT}
```

> output

```text
HTTP/1.1 200 OK
Server: nginx/1.27.4
Date: Sun, 06 Apr 2025 17:18:36 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 05 Feb 2025 11:06:32 GMT
Connection: keep-alive
ETag: "67a34638-267"
Accept-Ranges: bytes
```

Next: [Cleaning Up](13-cleanup.md)

11
docs/13-cleanup.md
Normal file
@@ -0,0 +1,11 @@

# Cleaning Up

In this lab you will delete the compute resources created during this tutorial.

## Compute Instances

Previous versions of this guide made use of GCP resources for various aspects of compute and networking. The current version is agnostic, and all configuration is performed on the `jumpbox`, the `server`, or the nodes.

Clean up is as simple as deleting all the virtual machines you created for this exercise.
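
For example, if your machines are KVM/libvirt guests (an assumption; use the equivalent commands for your hypervisor, and note the VM names below are illustrative):

```bash
for vm in jumpbox server node-0 node-1; do
  virsh destroy ${vm}                        # force the VM off
  virsh undefine ${vm} --remove-all-storage  # delete the VM and its disks
done
```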

Next: [Start Over](../README.md)

@@ -1,62 +0,0 @@

# Cleaning Up

In this lab you will delete the compute resources created during this tutorial.

## Compute Instances

Delete the controller and worker compute instances:

```
gcloud -q compute instances delete \
  controller-0 controller-1 controller-2 \
  worker-0 worker-1 worker-2
```

## Networking

Delete the external load balancer network resources:

```
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
  --region $(gcloud config get-value compute/region)
```

```
gcloud -q compute target-pools delete kubernetes-target-pool
```

Delete the `kubernetes-the-hard-way` static IP address:

```
gcloud -q compute addresses delete kubernetes-the-hard-way
```

Delete the `kubernetes-the-hard-way` firewall rules:

```
gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external
```

Delete the Pod network routes:

```
gcloud -q compute routes delete \
  kubernetes-route-10-200-0-0-24 \
  kubernetes-route-10-200-1-0-24 \
  kubernetes-route-10-200-2-0-24
```

Delete the `kubernetes` subnet:

```
gcloud -q compute networks subnets delete kubernetes
```

Delete the `kubernetes-the-hard-way` network VPC:

```
gcloud -q compute networks delete kubernetes-the-hard-way
```

11
downloads-amd64.txt
Normal file
@@ -0,0 +1,11 @@

https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubectl
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-apiserver
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-controller-manager
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-scheduler
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-proxy
https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubelet
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-amd64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.3.0-rc.1/runc.amd64
https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
https://github.com/containerd/containerd/releases/download/v2.1.0-beta.0/containerd-2.1.0-beta.0-linux-amd64.tar.gz
https://github.com/etcd-io/etcd/releases/download/v3.6.0-rc.3/etcd-v3.6.0-rc.3-linux-amd64.tar.gz

11
downloads-arm64.txt
Normal file
@@ -0,0 +1,11 @@

https://dl.k8s.io/v1.32.3/bin/linux/arm64/kubectl
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-apiserver
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-controller-manager
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-scheduler
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kube-proxy
https://dl.k8s.io/v1.32.3/bin/linux/arm64/kubelet
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-arm64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.3.0-rc.1/runc.arm64
https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-arm64-v1.6.2.tgz
https://github.com/containerd/containerd/releases/download/v2.1.0-beta.0/containerd-2.1.0-beta.0-linux-arm64.tar.gz
https://github.com/etcd-io/etcd/releases/download/v3.6.0-rc.3/etcd-v3.6.0-rc.3-linux-arm64.tar.gz
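
These lists are meant to be fed to a downloader on the `jumpbox`, picking the file that matches your machine's architecture. A sketch, assuming a Debian-based jumpbox (the `downloads` target directory is an illustrative choice):

```bash
ARCH=$(dpkg --print-architecture)   # prints amd64 or arm64
wget -q --show-progress --https-only --timestamping \
  -P downloads -i downloads-${ARCH}.txt
```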

19
units/containerd.service
Normal file
@@ -0,0 +1,19 @@

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target

21
units/etcd.service
Normal file
@@ -0,0 +1,21 @@

[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name controller \
  --initial-advertise-peer-urls http://127.0.0.1:2380 \
  --listen-peer-urls http://127.0.0.1:2380 \
  --listen-client-urls http://127.0.0.1:2379 \
  --advertise-client-urls http://127.0.0.1:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster controller=http://127.0.0.1:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

34
units/kube-apiserver.service
Normal file
@@ -0,0 +1,34 @@

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --allow-privileged=true \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.crt \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --etcd-servers=http://127.0.0.1:2379 \
  --event-ttl=1h \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \
  --kubelet-client-certificate=/var/lib/kubernetes/kube-api-server.crt \
  --kubelet-client-key=/var/lib/kubernetes/kube-api-server.key \
  --runtime-config='api/all=true' \
  --service-account-key-file=/var/lib/kubernetes/service-accounts.crt \
  --service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \
  --service-account-issuer=https://server.kubernetes.local:6443 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kube-api-server.crt \
  --tls-private-key-file=/var/lib/kubernetes/kube-api-server.key \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

22
units/kube-controller-manager.service
Normal file
@@ -0,0 +1,22 @@

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --bind-address=0.0.0.0 \
  --cluster-cidr=10.200.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
  --cluster-signing-key-file=/var/lib/kubernetes/ca.key \
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
  --root-ca-file=/var/lib/kubernetes/ca.crt \
  --service-account-private-key-file=/var/lib/kubernetes/service-accounts.key \
  --service-cluster-ip-range=10.32.0.0/24 \
  --use-service-account-credentials=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

12
units/kube-proxy.service
Normal file
@@ -0,0 +1,12 @@

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

13
units/kube-scheduler.service
Normal file
@@ -0,0 +1,13 @@

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --config=/etc/kubernetes/config/kube-scheduler.yaml \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

16
units/kubelet.service
Normal file
@@ -0,0 +1,16 @@

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --config=/var/lib/kubelet/kubelet-config.yaml \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
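
These unit files get copied to the appropriate machines and started with systemd in the earlier labs. A minimal sketch of the pattern, using containerd as the example:

```bash
cp containerd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now containerd
```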