mirror of https://github.com/kelseyhightower/kubernetes-the-hard-way.git (synced 2025-12-15 17:28:58 +03:00)
Refresh and add Apple Silicon (#338)
* Delete CKA stuff. It's covered in CKA repo
* Rename nodes
* Cluster up again
* Update issue template
* Update README
* Begin rearranging docs
* Update links
* Initial mac instructions
* iterm2 image
* update ssh-copy-id to be cross platform
* remove vagrant specific
* Apple scripts WIP
* Add var for architecture
* order input files
* Apple build working!
* auto-locate docs
* install sshpass
* Set execute bit
* apple done!
* install sshpass
* edits
* Corrections
* kube version output
* Adjustments
* Adjustments
@@ -1,82 +0,0 @@
# Prerequisites

## VM Hardware Requirements

- 8 GB of RAM (preferably 16 GB)
- 50 GB disk space

## VirtualBox

Download and install [VirtualBox](https://www.virtualbox.org/wiki/Downloads) on any one of the supported platforms:

- Windows hosts
- OS X hosts (x86 only, not Apple Silicon M-series)
- Linux distributions
- Solaris hosts

## Vagrant

Once VirtualBox is installed you may choose to deploy virtual machines manually on it.
Vagrant provides an easier way to deploy multiple virtual machines on VirtualBox more consistently.

Download and install [Vagrant](https://www.vagrantup.com/) on your platform.

- Windows
- Debian
- CentOS
- Linux
- macOS (x86 only, not M1)

This tutorial assumes that you have also installed Vagrant.

## Lab Defaults

The labs have been configured with the following networking defaults. If you change any of these after you have deployed any part of the lab, you'll need to completely reset it and start again from the beginning:

```bash
vagrant destroy -f
vagrant up
```

If you do change any of these, **please consider that a personal preference and don't submit a PR for it**.

### Virtual Machine Network

The network used by the VirtualBox virtual machines is `192.168.56.0/24`.

To change this, edit the [Vagrantfile](../vagrant/Vagrantfile) in your cloned copy (do not edit directly in GitHub), and set the new value for the network prefix at line 9. This should not overlap any of the other network settings.

Note that you do not need to edit any of the other scripts to make the above change. It is all managed by shell variable computations based on the assigned VM IP addresses and the values in the hosts file (also computed).

It is *recommended* that you leave the pod and service networks with the following defaults. If you change them then you will also need to edit one or both of the CoreDNS and Weave networking manifests to accommodate your change.

### Pod Network

The network used to assign IP addresses to pods is `10.244.0.0/16`.

To change this, open all the `.md` files in the [docs](../docs/) directory in your favourite IDE and do a global replace on<br>
`POD_CIDR=10.244.0.0/16`<br>
with the new CIDR range. This should not overlap any of the other network settings.

### Service Network

The network used to assign IP addresses to Cluster IP services is `10.96.0.0/16`.

To change this, open all the `.md` files in the [docs](../docs/) directory in your favourite IDE and do a global replace on<br>
`SERVICE_CIDR=10.96.0.0/16`<br>
with the new CIDR range. This should not overlap any of the other network settings.

Additionally, edit line 164 of [coredns.yaml](../deployments/coredns.yaml) to set the new DNS service address (it should still end with `.10`).
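If you prefer to script the global replace described above rather than use an IDE, something like the following would work. This is only a sketch: `10.97.0.0/16` is an example value, and the command assumes it is run from your cloned copy of the repository.

```bash
# Hypothetical example: change the service network across all lab documents.
# Pick ranges that do not overlap the VM or pod networks.
NEW_SERVICE_CIDR=10.97.0.0/16
sed -i "s#SERVICE_CIDR=10.96.0.0/16#SERVICE_CIDR=${NEW_SERVICE_CIDR}#g" docs/*.md

# The same pattern applies to POD_CIDR=10.244.0.0/16 if you change the pod network.
```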

## Running Commands in Parallel with tmux

[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.

> The use of tmux is optional and not required to complete this tutorial.



> Enable synchronize-panes by pressing `CTRL+B` followed by `"` to split the window into two panes. In each pane (selectable with mouse), ssh to the host(s) you will be working with.<br>Next type `CTRL+X` at the prompt to begin sync. In sync mode, the dividing line between panes will be red. Everything you type or paste in one pane will be echoed in the other.<br>To disable synchronization type `CTRL+X` again.<br><br>Note that the `CTRL+X` key binding is provided by a `.tmux.conf` loaded onto the VM by the vagrant provisioner.
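The exact contents of that provisioned `.tmux.conf` are not shown here; a binding along the following lines would provide the toggle described above (an assumption, shown only for illustration):

```bash
# Hypothetical: append a synchronize-panes toggle bound to CTRL+X to your tmux config
echo 'bind -n C-x setw synchronize-panes' >> ~/.tmux.conf
```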

Next: [Compute Resources](02-compute-resources.md)
@@ -1,141 +0,0 @@
# Provisioning Compute Resources

Note: You must have VirtualBox and Vagrant configured at this point.

Download this GitHub repository:

```bash
git clone https://github.com/mmumshad/kubernetes-the-hard-way.git
```

CD into the vagrant directory:

```bash
cd kubernetes-the-hard-way/vagrant
```

The `Vagrantfile` is configured to assume you have at least an 8 core CPU, which most modern Core i5, i7 and i9 CPUs do, and at least 16 GB RAM. You can tune these values, especially if you have *less* than this, by editing the `Vagrantfile` before the next step below and adjusting the values for `RAM_SIZE` and `CPU_CORES` accordingly.

This will not work if you have less than 8 GB of RAM.

Run Vagrant up:

```bash
vagrant up
```

This does the following:

- Deploys 5 VMs - 2 Master, 2 Worker and 1 Loadbalancer with the name 'kubernetes-ha-*'
> These are the default settings. This can be changed at the top of the Vagrantfile.
> If you choose to change these settings, please also update `vagrant/ubuntu/vagrant/setup-hosts.sh`
> to add the additional hosts to the `/etc/hosts` default before running `vagrant up`.

- Sets IP addresses in the range `192.168.56.x`

| VM           | VM Name                | Purpose      | IP            | Forwarded Port | RAM  |
| ------------ | ---------------------- |:------------:| -------------:| --------------:|-----:|
| master-1     | kubernetes-ha-master-1 | Master       | 192.168.56.11 | 2711           | 2048 |
| master-2     | kubernetes-ha-master-2 | Master       | 192.168.56.12 | 2712           | 1024 |
| worker-1     | kubernetes-ha-worker-1 | Worker       | 192.168.56.21 | 2721           | 512  |
| worker-2     | kubernetes-ha-worker-2 | Worker       | 192.168.56.22 | 2722           | 1024 |
| loadbalancer | kubernetes-ha-lb       | LoadBalancer | 192.168.56.30 | 2730           | 1024 |

> These are the default settings. These can be changed in the Vagrantfile.

- Adds a DNS entry to each of the nodes to access the internet
> DNS: 8.8.8.8

- Sets required kernel settings for kubernetes networking to function correctly.

See the [Vagrant page](../vagrant/README.md) for details.

## SSH to the nodes

There are two ways to SSH into the nodes:

### 1. SSH using Vagrant

From the directory you ran the `vagrant up` command, run `vagrant ssh <vm>`, for example `vagrant ssh master-1`.
> Note: Use the VM field from the above table and not the VM name itself.

### 2. SSH Using SSH Client Tools

Use your favourite SSH terminal tool (e.g. PuTTY).

Use the above IP addresses. Username and password based SSH is disabled by default.

Vagrant generates a private key for each of these VMs. It is placed under the `.vagrant` folder (in the directory you ran the `vagrant up` command from) at the below path for each VM:

- **Private key path**: `.vagrant/machines/<machine name>/virtualbox/private_key`
- **Username/password**: `vagrant/vagrant`
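Putting the two together, a connection from your host to `master-1` with a standard OpenSSH client would look like this (run from the directory containing the `.vagrant` folder):

```bash
# Connect to master-1 (192.168.56.11) as the vagrant user with the generated private key
ssh -i .vagrant/machines/master-1/virtualbox/private_key vagrant@192.168.56.11
```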

## Verify Environment

- Ensure all VMs are up.
- Ensure VMs are assigned the above IP addresses.
- Ensure you can SSH into these VMs using the IP and private keys, or `vagrant ssh`.
- Ensure the VMs can ping each other.
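A quick way to run these checks from the directory you ran `vagrant up` in is sketched below (the IP list matches the defaults in the table above):

```bash
# Confirm all VMs are running
vagrant status

# Confirm each VM answers on its expected address
for ip in 192.168.56.11 192.168.56.12 192.168.56.21 192.168.56.22 192.168.56.30; do
  ping -c 1 -W 2 ${ip} > /dev/null && echo "${ip} reachable" || echo "${ip} UNREACHABLE"
done
```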

## Troubleshooting Tips

### Failed Provisioning

If any of the VMs failed to provision, or is not configured correctly, delete the VM using the command:

```bash
vagrant destroy <vm>
```

Then re-provision. Only the missing VMs will be re-provisioned:

```bash
vagrant up
```

Sometimes the delete does not delete the folder created for the VM and throws an error similar to this:

VirtualBox error:

VBoxManage.exe: error: Could not rename the directory 'D:\VirtualBox VMs\ubuntu-bionic-18.04-cloudimg-20190122_1552891552601_76806' to 'D:\VirtualBox VMs\kubernetes-ha-worker-2' to save the settings file (VERR_ALREADY_EXISTS)
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component SessionMachine, interface IMachine, callee IUnknown
VBoxManage.exe: error: Context: "SaveSettings()" at line 3105 of file VBoxManageModifyVM.cpp

In such cases delete the VM, then delete the VM folder and then re-provision, e.g.

```bash
vagrant destroy worker-2
rmdir "<path-to-vm-folder>\kubernetes-ha-worker-2"
vagrant up
```

### Provisioner gets stuck

This will most likely happen at "Waiting for machine to reboot".

1. Hit `CTRL+C`
1. Kill any running `ruby` process, or Vagrant will complain.
1. Destroy the VM that got stuck: `vagrant destroy <vm>`
1. Re-provision. It will pick up where it left off: `vagrant up`

# Pausing the Environment

You do not need to complete the entire lab in one session. You may shut down and resume the environment as follows, if you need to power off your computer.

To shut down, run the following. This will gracefully shut down all the VMs in the reverse order to which they were started:

```bash
vagrant halt
```

To power on again:

```bash
vagrant up
```

Prev: [Prerequisites](01-prerequisites.md)<br>
Next: [Client tools](03-client-tools.md)
@@ -1,16 +1,16 @@
# Installing the Client Tools

First identify a system from where you will perform administrative tasks, such as creating certificates, `kubeconfig` files and distributing them to the different VMs.
From this point on, the steps are *exactly* the same for VirtualBox and Apple Silicon, as it is now about configuring Kubernetes itself on the Linux hosts which you have just provisioned.

If you are on a Linux laptop, then your laptop could be this system. In my case I chose the `master-1` node to perform administrative tasks. Whichever system you choose, make sure that system is able to access all the provisioned VMs through SSH to copy files over.
Begin by logging into `controlplane01` using `vagrant ssh` for VirtualBox, or `multipass shell` for Apple Silicon.
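For example (the instance name `controlplane01` is the default from the compute resources step):

```bash
vagrant ssh controlplane01          # VirtualBox
multipass shell controlplane01      # Apple Silicon
```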

## Access all VMs

Here we create an SSH key pair for the `vagrant` user who we are logged in as. We will copy the public key of this pair to the other master and both workers to permit us to use password-less SSH (and SCP) to get from `master-1` to these other nodes in the context of the `vagrant` user which exists on all nodes.
Here we create an SSH key pair for the user we are logged in as (this is `vagrant` on VirtualBox, `ubuntu` on Apple Silicon). We will copy the public key of this pair to the other controlplane and both workers to permit us to use password-less SSH (and SCP) to get from `controlplane01` to these other nodes in the context of the user which exists on all nodes.

Generate SSH key pair on `master-1` node:
Generate SSH key pair on `controlplane01` node:

[//]: # (host:master-1)
[//]: # (host:controlplane01)

```bash
ssh-keygen
@@ -18,32 +18,52 @@ ssh-keygen

Leave all settings to default by pressing `ENTER` at any prompt.

Add this key to the local `authorized_keys` (`master-1`) as in some commands we `scp` to ourselves.
Add this key to the local `authorized_keys` (`controlplane01`) as in some commands we `scp` to ourselves.

```bash
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
```

Copy the key to the other hosts. For this step please enter `vagrant` where a password is requested.
Copy the key to the other hosts. You will be asked to enter a password for each of the `ssh-copy-id` commands. The password is:

* VirtualBox: `vagrant`
* Apple Silicon: `ubuntu`

The option `-o StrictHostKeyChecking=no` tells it not to ask if you want to connect to a previously unknown host. Not best practice in the real world, but it speeds things up here.

`$(whoami)` selects the appropriate user name to connect to the remote VMs. On VirtualBox this evaluates to `vagrant`; on Apple Silicon it is `ubuntu`.

```bash
ssh-copy-id -o StrictHostKeyChecking=no vagrant@master-2
ssh-copy-id -o StrictHostKeyChecking=no vagrant@loadbalancer
ssh-copy-id -o StrictHostKeyChecking=no vagrant@worker-1
ssh-copy-id -o StrictHostKeyChecking=no vagrant@worker-2
ssh-copy-id -o StrictHostKeyChecking=no $(whoami)@controlplane02
ssh-copy-id -o StrictHostKeyChecking=no $(whoami)@loadbalancer
ssh-copy-id -o StrictHostKeyChecking=no $(whoami)@node01
ssh-copy-id -o StrictHostKeyChecking=no $(whoami)@node02
```

For each host, the output should be similar to this. If it is not, then you may have entered an incorrect password. Retry the step.

```
Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'vagrant@controlplane02'"
and check to make sure that only the key(s) you wanted were added.
```

Verify connection:

```
ssh controlplane01
exit

ssh controlplane02
exit

ssh node01
exit

ssh node02
exit
```

## Install kubectl

The [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:

@@ -52,10 +72,12 @@ Reference: [https://kubernetes.io/docs/tasks/tools/install-kubectl/](https://kub

We will be using `kubectl` early on to generate `kubeconfig` files for the controlplane components.

The environment variable `ARCH` is pre-set during VM deployment according to whether you are using VirtualBox (`amd64`) or Apple Silicon (`arm64`), to ensure the correct version of this and later software is downloaded for your machine architecture.
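If you are working on a machine where `ARCH` has not been pre-set, a manual equivalent would be something like the following (an assumption — the lab's provisioning scripts normally take care of this for you):

```bash
# On Debian/Ubuntu hosts this prints amd64 or arm64, matching the values described above
ARCH=$(dpkg --print-architecture)
echo ${ARCH}
```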

### Linux

```bash
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/${ARCH}/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
```

@@ -65,29 +87,15 @@ sudo mv kubectl /usr/local/bin/

Verify `kubectl` is installed:

```
kubectl version -o yaml
kubectl version --client
```

The output will be similar to this, although versions may be newer:

```
kubectl version -o yaml
clientVersion:
  buildDate: "2023-11-15T16:58:22Z"
  compiler: gc
  gitCommit: bae2c62678db2b5053817bc97181fcc2e8388103
  gitTreeState: clean
  gitVersion: v1.28.4
  goVersion: go1.20.11
  major: "1"
  minor: "28"
  platform: linux/amd64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3

The connection to the server localhost:8080 was refused - did you specify the right host or port?
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
```

Don't worry about the error at the end as it is expected. We have not set anything up yet!

Prev: [Compute Resources](02-compute-resources.md)<br>
Next: [Certificate Authority](04-certificate-authority.md)
Next: [Certificate Authority](04-certificate-authority.md)<br>
Prev: Compute Resources ([VirtualBox](../VirtualBox/docs/02-compute-resources.md)), ([Apple Silicon](../apple-silicon/docs/02-compute-resources.md))
@@ -4,23 +4,23 @@ In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/w

# Where to do these?

You can do these on any machine with `openssl` on it. But you should be able to copy the generated files to the provisioned VMs. Or just do these from one of the master nodes.
You can do these on any machine with `openssl` on it. But you should be able to copy the generated files to the provisioned VMs. Or just do these from one of the controlplane nodes.

In our case we do the following steps on the `master-1` node, as we have set it up to be the administrative client.
In our case we do the following steps on the `controlplane01` node, as we have set it up to be the administrative client.

[//]: # (host:master-1)
[//]: # (host:controlplane01)

## Certificate Authority

In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.

Query IPs of hosts we will insert as certificate subject alternative names (SANs), which will be read from `/etc/hosts`. Note that doing this allows us to change the VM network range more easily from the default for these labs which is `192.168.56.0/24`
Query IPs of hosts we will insert as certificate subject alternative names (SANs), which will be read from `/etc/hosts`.

Set up environment variables. Run the following:

```bash
MASTER_1=$(dig +short master-1)
MASTER_2=$(dig +short master-2)
CONTROL01=$(dig +short controlplane01)
CONTROL02=$(dig +short controlplane02)
LOADBALANCER=$(dig +short loadbalancer)
```

@@ -34,14 +34,14 @@ API_SERVICE=$(echo $SERVICE_CIDR | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s.1",

Check that the environment variables are set. Run the following:

```bash
echo $MASTER_1
echo $MASTER_2
echo $CONTROL01
echo $CONTROL02
echo $LOADBALANCER
echo $SERVICE_CIDR
echo $API_SERVICE
```

The output should look like this. If you changed any of the defaults mentioned in the [prerequisites](./01-prerequisites.md) page, then addresses may differ.
The output should look like this, with one IP address per line. If you changed any of the defaults mentioned in the [prerequisites](./01-prerequisites.md) page, then addresses may differ. The first 3 addresses will also be different for Apple Silicon on Multipass (likely 192.168.64.x).

```
192.168.56.11
@@ -51,7 +51,7 @@ The output should look like this. If you changed any of the defaults mentioned i
10.96.0.1
```

Create a CA certificate, then generate a Certificate Signing Request and use it to create a private key:
Create a CA certificate by first creating a private key, then using it to create a certificate signing request, then self-signing the new certificate with our key.

```bash
{
@@ -78,12 +78,14 @@ Reference : https://kubernetes.io/docs/tasks/administer-cluster/certificates/#op

The `ca.crt` is the Kubernetes Certificate Authority certificate and `ca.key` is the Kubernetes Certificate Authority private key.
You will use the `ca.crt` file in many places, so it will be copied to many places.

The `ca.key` is used by the CA for signing certificates. And it should be securely stored. In this case our master node(s) is our CA server as well, so we will store it on master node(s). There is no need to copy this file elsewhere.
The `ca.key` is used by the CA for signing certificates. And it should be securely stored. In this case our controlplane node(s) is our CA server as well, so we will store it on controlplane node(s). There is no need to copy this file elsewhere.

## Client and Server Certificates

In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.

To better understand the role of client certificates with respect to users and groups, see [this informative video](https://youtu.be/I-iVrIWfMl8). Note that all the Kubernetes services below are themselves cluster users.

### The Admin Client Certificate

Generate the `admin` client certificate and private key:
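The full command block follows in the complete document; the pattern is the same three `openssl` steps used for the worker certificates later in this lab, with the certificate subject carrying the user (`CN`) and group (`O`). A sketch only — the subject values shown here are assumed for illustration:

```bash
# Sketch: private key, then a CSR whose CN is the user and O the group, then sign with the cluster CA
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 1000
```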

@@ -191,7 +193,7 @@ kube-scheduler.crt

### The Kubernetes API Server Certificate

The kube-apiserver certificate requires all names that various components may reach it to be part of the alternate names. These include the different DNS names, and IP addresses such as the master servers IP address, the load balancers IP address, the kube-api service IP address etc.
The kube-apiserver certificate requires all names that various components may reach it by to be part of its alternate names. These include the different DNS names, and IP addresses such as the controlplane servers' IP addresses, the load balancer's IP address, the kube-api service IP address etc. These provide an *identity* for the certificate, which is key in the SSL process for a server to prove who it is.

The `openssl` command cannot take alternate names as a command line parameter, so we must create a `conf` file for it:

@@ -213,8 +215,8 @@ DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
IP.1 = ${API_SERVICE}
IP.2 = ${MASTER_1}
IP.3 = ${MASTER_2}
IP.2 = ${CONTROL01}
IP.3 = ${CONTROL02}
IP.4 = ${LOADBALANCER}
IP.5 = 127.0.0.1
EOF

@@ -241,7 +243,7 @@ kube-apiserver.crt
kube-apiserver.key
```

# The Kubelet Client Certificate
### The API Server Kubelet Client Certificate

This certificate is for the API server to authenticate with the kubelets when it requests information from them.

@@ -282,7 +284,7 @@ apiserver-kubelet-client.key

### The ETCD Server Certificate

Similarly ETCD server certificate must have addresses of all the servers part of the ETCD cluster
Similarly, the ETCD server certificate must have the addresses of all the servers that are part of the ETCD cluster. This too is a server certificate, which is again all about proving identity.

The `openssl` command cannot take alternate names as a command line parameter, so we must create a `conf` file for it:

@@ -297,8 +299,8 @@ basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = ${MASTER_1}
IP.2 = ${MASTER_2}
IP.1 = ${CONTROL01}
IP.2 = ${CONTROL02}
IP.3 = 127.0.0.1
EOF
```

@@ -326,7 +328,7 @@ etcd-server.crt

## The Service Account Key Pair

The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as describe in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/) documentation.

Generate the `service-account` certificate and private key:

@@ -355,7 +357,7 @@ Run the following, and select option 1 to check all required certificates were g

[//]: # (command:./cert_verify.sh 1)

```bash
```
./cert_verify.sh
```

@@ -373,7 +375,7 @@ Copy the appropriate certificates and private keys to each instance:

```bash
{
for instance in master-1 master-2; do
for instance in controlplane01 controlplane02; do
  scp -o StrictHostKeyChecking=no ca.crt ca.key kube-apiserver.key kube-apiserver.crt \
    apiserver-kubelet-client.crt apiserver-kubelet-client.key \
    service-account.key service-account.crt \
@@ -383,21 +385,21 @@ for instance in master-1 master-2; do
    ${instance}:~/
done

for instance in worker-1 worker-2 ; do
for instance in node01 node02 ; do
  scp ca.crt kube-proxy.crt kube-proxy.key ${instance}:~/
done
}
```

## Optional - Check Certificates on master-2
## Optional - Check Certificates on controlplane02

At `master-2` node run the following, selecting option 1
At `controlplane02` node run the following, selecting option 1

[//]: # (commandssh master-2 './cert_verify.sh 1')
[//]: # (commandssh controlplane02 './cert_verify.sh 1')

```
./cert_verify.sh
```

Prev: [Client tools](03-client-tools.md)<br>
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)<br>
Prev: [Client tools](03-client-tools.md)
@@ -14,7 +14,7 @@ In this section you will generate kubeconfig files for the `controller manager`,

Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the load balancer will be used, so let's first get the address of the loadbalancer into a shell variable such that we can use it in the kubeconfigs for services that run on worker nodes. The controller manager and scheduler need to talk to the local API server, hence they use the localhost address.

[//]: # (host:master-1)
[//]: # (host:controlplane01)

```bash
LOADBALANCER=$(dig +short loadbalancer)
@@ -161,7 +161,7 @@ Reference docs for kubeconfig [here](https://kubernetes.io/docs/tasks/access-app

Copy the appropriate `kube-proxy` kubeconfig files to each worker instance:

```bash
for instance in worker-1 worker-2; do
for instance in node01 node02; do
  scp kube-proxy.kubeconfig ${instance}:~/
done
```

@@ -169,22 +169,22 @@ done

Copy the appropriate `admin.kubeconfig`, `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:

```bash
for instance in master-1 master-2; do
for instance in controlplane01 controlplane02; do
  scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```

## Optional - Check kubeconfigs

At `master-1` and `master-2` nodes, run the following, selecting option 2
At `controlplane01` and `controlplane02` nodes, run the following, selecting option 2

[//]: # (command./cert_verify.sh 2)
[//]: # (command:ssh master-2 './cert_verify.sh 2')
[//]: # (command:ssh controlplane02 './cert_verify.sh 2')

```
./cert_verify.sh
```

Prev: [Certificate Authority](04-certificate-authority.md)<br>
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
Next: [Generating the Data Encryption Config and Key](./06-data-encryption-keys.md)<br>
Prev: [Certificate Authority](./04-certificate-authority.md)
@@ -6,9 +6,9 @@ In this lab you will generate an encryption key and an [encryption config](https

## The Encryption Key

[//]: # (host:master-1)
[//]: # (host:controlplane01)

Generate an encryption key:
Generate an encryption key. This is simply 32 bytes of random data, which we base64 encode:

```bash
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
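# Optional sanity check (not part of the original lab): the key should decode to exactly 32 bytes
# echo "${ENCRYPTION_KEY}" | base64 -d | wc -c    # expect: 32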
@@ -37,7 +37,7 @@ EOF

Copy the `encryption-config.yaml` encryption config file to each controller instance:

```bash
for instance in master-1 master-2; do
for instance in controlplane01 controlplane02; do
  scp encryption-config.yaml ${instance}:~/
done
```

@@ -45,7 +45,7 @@ done

Move the `encryption-config.yaml` encryption config file to the appropriate directory.

```bash
for instance in master-1 master-2; do
for instance in controlplane01 controlplane02; do
  ssh ${instance} sudo mkdir -p /var/lib/kubernetes/
  ssh ${instance} sudo mv encryption-config.yaml /var/lib/kubernetes/
done
@@ -53,5 +53,5 @@ done

Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data

Prev: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)<br>
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)<br>
Prev: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
@@ -2,9 +2,11 @@

Kubernetes components are stateless and store cluster state in [etcd](https://etcd.io/). In this lab you will bootstrap a two node etcd cluster and configure it for high availability and secure remote access.

If you examine the command line arguments passed to etcd in its unit file, you should recognise some of the certificates and keys created in earlier sections of this course.

## Prerequisites

The commands in this lab must be run on each controller instance: `master-1`, and `master-2`. Login to each of these using an SSH terminal.
The commands in this lab must be run on each controller instance: `controlplane01`, and `controlplane02`. Login to each of these using an SSH terminal.

### Running commands in parallel with tmux

@@ -16,21 +18,21 @@ The commands in this lab must be run on each controller instance: `master-1`, an

Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:

[//]: # (host:master-1-master2)
[//]: # (host:controlplane01-controlplane02)

```bash
ETCD_VERSION="v3.5.9"
wget -q --show-progress --https-only --timestamping \
  "https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz"
  "https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-${ARCH}.tar.gz"
```

Extract and install the `etcd` server and the `etcdctl` command line utility:

```bash
{
  tar -xvf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
  sudo mv etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin/
  tar -xvf etcd-${ETCD_VERSION}-linux-${ARCH}.tar.gz
  sudo mv etcd-${ETCD_VERSION}-linux-${ARCH}/etcd* /usr/local/bin/
}
```

@@ -52,12 +54,11 @@ Copy and secure certificates. Note that we place `ca.crt` in our main PKI direct
```

The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers.<br>
Retrieve the internal IP address of the master(etcd) nodes, and also that of master-1 and master-2 for the etcd cluster member list
Retrieve the internal IP address of the controlplane (etcd) nodes, and also that of controlplane01 and controlplane02 for the etcd cluster member list.

```bash
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
MASTER_1=$(dig +short master-1)
MASTER_2=$(dig +short master-2)
CONTROL01=$(dig +short controlplane01)
CONTROL02=$(dig +short controlplane02)
```
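The updated unit file below references `PRIMARY_IP`, which is not set in this hunk; on these labs it would be the node's own address, for example resolved from `/etc/hosts` in the same way as the other variables (an assumption — the provisioning scripts may set it differently):

```bash
# Hedged sketch: each node's primary address, resolved from its own hostname via /etc/hosts
PRIMARY_IP=$(dig +short $(hostname))
echo ${PRIMARY_IP}
```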

Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
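The command itself is cut off by this hunk; consistent with the description above, it would be along the lines of the following sketch (not the verbatim lab command):

```bash
# Name each etcd member after the host it runs on, e.g. controlplane01 / controlplane02
ETCD_NAME=$(hostname -s)
echo ${ETCD_NAME}
```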

@@ -85,12 +86,12 @@ ExecStart=/usr/local/bin/etcd \\
  --peer-trusted-ca-file=/etc/etcd/ca.crt \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-advertise-peer-urls https://${PRIMARY_IP}:2380 \\
  --listen-peer-urls https://${PRIMARY_IP}:2380 \\
  --listen-client-urls https://${PRIMARY_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${PRIMARY_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster master-1=https://${MASTER_1}:2380,master-2=https://${MASTER_2}:2380 \\
  --initial-cluster controlplane01=https://${CONTROL01}:2380,controlplane02=https://${CONTROL02}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
@@ -111,13 +112,15 @@ EOF
}
```

> Remember to run the above commands on each controller node: `master-1`, and `master-2`.
> Remember to run the above commands on each controller node: `controlplane01`, and `controlplane02`.

## Verification

[//]: # (sleep:5)

List the etcd cluster members:
List the etcd cluster members.

After running the above commands on both controlplane nodes, run the following on either or both of `controlplane01` and `controlplane02`.

```bash
sudo ETCDCTL_API=3 etcdctl member list \
@@ -127,14 +130,14 @@ sudo ETCDCTL_API=3 etcdctl member list \
  --key=/etc/etcd/etcd-server.key
```

> output
Output will be similar to this:

```
45bf9ccad8d8900a, started, master-2, https://192.168.56.12:2380, https://192.168.56.12:2379
54a5796a6803f252, started, master-1, https://192.168.56.11:2380, https://192.168.56.11:2379
45bf9ccad8d8900a, started, controlplane02, https://192.168.56.12:2380, https://192.168.56.12:2379
54a5796a6803f252, started, controlplane01, https://192.168.56.11:2380, https://192.168.56.11:2379
```

Reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#starting-etcd-clusters

Prev: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)<br>
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
Next: [Bootstrapping the Kubernetes Control Plane](./08-bootstrapping-kubernetes-controllers.md)<br>
Prev: [Generating the Data Encryption Config and Key](./06-data-encryption-keys.md)
@@ -2,17 +2,20 @@

In this lab you will bootstrap the Kubernetes control plane across 2 compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.

Note that in a production-ready cluster it is recommended to have an odd number of master nodes as for multi-node services like etcd, leader election and quorum work better. See lecture on this ([KodeKloud](https://kodekloud.com/topic/etcd-in-ha/), [Udemy](https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/learn/lecture/14296192#overview)). We're only using two here to save on RAM on your workstation.
Note that in a production-ready cluster it is recommended to have an odd number of controlplane nodes, as leader election and quorum work better for multi-node services like etcd. See lecture on this ([KodeKloud](https://kodekloud.com/topic/etcd-in-ha/), [Udemy](https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/learn/lecture/14296192#overview)). We're only using two here to save on RAM on your workstation.

If you examine the command line arguments passed to the various control plane components, you should recognise many of the files that were created in earlier sections of this course, such as certificates, keys, kubeconfigs, the encryption configuration etc.

## Prerequisites

The commands in this lab up as far as the load balancer configuration must be run on each controller instance: `master-1`, and `master-2`. Login to each controller instance using SSH Terminal.
The commands in this lab up as far as the load balancer configuration must be run on each controller instance: `controlplane01`, and `controlplane02`. Login to each controller instance using SSH Terminal.

You can perform this step with [tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux).

## Provision the Kubernetes Control Plane

[//]: # (host:master-1-master2)
[//]: # (host:controlplane01-controlplane02)

### Download and Install the Kubernetes Controller Binaries

@@ -22,10 +25,10 @@ Download the latest official Kubernetes release binaries:
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)

wget -q --show-progress --https-only --timestamping \
  "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-apiserver" \
  "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-controller-manager" \
  "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-scheduler" \
  "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubectl"
  "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-apiserver" \
  "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-controller-manager" \
  "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-scheduler" \
  "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kubectl"
```

Reference: https://kubernetes.io/releases/download/#binaries
@@ -62,15 +65,14 @@ The instance internal IP address will be used to advertise the API Server to mem

Retrieve these internal IP addresses:

```bash
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
LOADBALANCER=$(dig +short loadbalancer)
```

IP addresses of the two master nodes, where the etcd servers are.
IP addresses of the two controlplane nodes, where the etcd servers are.

```bash
MASTER_1=$(dig +short master-1)
MASTER_2=$(dig +short master-2)
CONTROL01=$(dig +short controlplane01)
CONTROL02=$(dig +short controlplane02)
```

CIDR ranges used *within* the cluster
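The block that sets these is cut off by the next hunk; from the lab defaults described in the prerequisites they would be the following (shown here for reference, not as the verbatim lab commands):

```bash
POD_CIDR=10.244.0.0/16
SERVICE_CIDR=10.96.0.0/16
```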

@@ -90,7 +92,7 @@ Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --advertise-address=${PRIMARY_IP} \\
  --allow-privileged=true \\
  --apiserver-count=2 \\
  --audit-log-maxage=30 \\
@@ -105,7 +107,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
  --etcd-cafile=/var/lib/kubernetes/pki/ca.crt \\
  --etcd-certfile=/var/lib/kubernetes/pki/etcd-server.crt \\
  --etcd-keyfile=/var/lib/kubernetes/pki/etcd-server.key \\
  --etcd-servers=https://${MASTER_1}:2379,https://${MASTER_2}:2379 \\
  --etcd-servers=https://${CONTROL01}:2379,https://${CONTROL02}:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/pki/ca.crt \\
@@ -210,7 +212,7 @@ sudo chmod 600 /var/lib/kubernetes/*.kubeconfig

## Optional - Check Certificates and kubeconfigs

At `master-1` and `master-2` nodes, run the following, selecting option 3
At `controlplane01` and `controlplane02` nodes, run the following, selecting option 3

[//]: # (command:./cert_verify.sh 3)

@@ -236,6 +238,8 @@ At `master-1` and `master-2` nodes, run the following, selecting option 3

[//]: # (sleep:10)

After running the above commands on both controlplane nodes, run the following on `controlplane01`.

```bash
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```
@@ -253,7 +257,7 @@ etcd-0 Healthy {"health": "true"}
etcd-1               Healthy   {"health": "true"}
```

> Remember to run the above commands on each controller node: `master-1`, and `master-2`.
> Remember to run the above commands on each controller node: `controlplane01`, and `controlplane02`.

## The Kubernetes Frontend Load Balancer

@@ -264,7 +268,7 @@ In this section you will provision an external load balancer to front the Kubern

An NLB operates at [layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_layer) (TCP) meaning it passes the traffic straight through to the back end servers unfettered and does not interfere with the TLS process, leaving this to the Kube API servers.

Login to `loadbalancer` instance using SSH Terminal.
Login to `loadbalancer` instance using `vagrant ssh` (or `multipass shell` on Apple Silicon).

[//]: # (host:loadbalancer)

@@ -273,15 +277,17 @@ Login to `loadbalancer` instance using SSH Terminal.
sudo apt-get update && sudo apt-get install -y haproxy
```

Read IP addresses of master nodes and this host to shell variables
Read IP addresses of controlplane nodes and this host into shell variables

```bash
MASTER_1=$(dig +short master-1)
MASTER_2=$(dig +short master-2)
CONTROL01=$(dig +short controlplane01)
CONTROL02=$(dig +short controlplane02)
LOADBALANCER=$(dig +short loadbalancer)
```

Create HAProxy configuration to listen on API server port on this host and distribute requests evenly to the two master nodes.
Create HAProxy configuration to listen on API server port on this host and distribute requests evenly to the two controlplane nodes.

We configure it to operate as a [layer 4](https://en.wikipedia.org/wiki/Transport_layer) loadbalancer (using `mode tcp`), which means it forwards any traffic directly to the backends without doing anything like [SSL offloading](https://ssl2buy.com/wiki/ssl-offloading).

```bash
cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
@@ -289,14 +295,14 @@ frontend kubernetes
    bind ${LOADBALANCER}:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes
    default_backend kubernetes-controlplane-nodes

backend kubernetes-master-nodes
backend kubernetes-controlplane-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 ${MASTER_1}:6443 check fall 3 rise 2
    server master-2 ${MASTER_2}:6443 check fall 3 rise 2
    server controlplane01 ${CONTROL01}:6443 check fall 3 rise 2
    server controlplane02 ${CONTROL02}:6443 check fall 3 rise 2
EOF
```

@@ -311,24 +317,10 @@ sudo systemctl restart haproxy

Make an HTTP request for the Kubernetes version info:

```bash
curl https://${LOADBALANCER}:6443/version -k
curl -k https://${LOADBALANCER}:6443/version
```

> output
This should output some details about the version and build information of the API server.

```
{
  "major": "1",
  "minor": "24",
  "gitVersion": "${KUBE_VERSION}",
  "gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
  "gitTreeState": "clean",
  "buildDate": "2022-07-13T14:23:26Z",
  "goVersion": "go1.18.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
```

Prev: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)<br>
Next: [Installing CRI on the Kubernetes Worker Nodes](09-install-cri-workers.md)
Next: [Installing CRI on the Kubernetes Worker Nodes](./09-install-cri-workers.md)<br>
Prev: [Bootstrapping the etcd Cluster](./07-bootstrapping-etcd.md)
@@ -6,52 +6,90 @@ Reference: https://github.com/containerd/containerd/blob/main/docs/getting-start

### Download and Install Container Networking

The commands in this lab must be run on each worker instance: `worker-1`, and `worker-2`. Login to each controller instance using SSH Terminal.
The commands in this lab must be run on each worker instance: `node01`, and `node02`. Login to each worker instance using SSH Terminal.

Here we will install the container runtime `containerd` from the Ubuntu distribution, and kubectl plus the CNI tools from the Kubernetes distribution. Kubectl is required on worker-2 to initialize kubeconfig files for the worker-node auto registration.
Here we will install the container runtime `containerd` from the Ubuntu distribution, and kubectl plus the CNI tools from the Kubernetes distribution. Kubectl is required on `node02` to initialize kubeconfig files for the worker-node auto registration.

[//]: # (host:worker-1-worker-2)
[//]: # (host:node01-node02)

You can perform this step with [tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux).

Set up the Kubernetes `apt` repository
1. Update the apt package index and install packages needed to use the Kubernetes apt repository:
```bash
{
  sudo apt-get update
  sudo apt-get install -y apt-transport-https ca-certificates curl
}
```

```bash
{
KUBE_LATEST=$(curl -L -s https://dl.k8s.io/release/stable.txt | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')
1. Set up the required kernel modules and make them persistent
```bash
{
  cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
  sudo modprobe overlay
  sudo modprobe br_netfilter
}
```

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
}
```
1. Set the required kernel parameters and make them persistent
```bash
{
  cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

Install `containerd` and CNI tools, first refreshing `apt` repos to get up to date versions.
  sudo sysctl --system
}
```

```bash
{
sudo apt update
sudo apt install -y containerd kubernetes-cni kubectl ipvsadm ipset
}
```
1. Determine the latest version of Kubernetes and store it in a shell variable

Set up `containerd` configuration to enable systemd Cgroups
```bash
KUBE_LATEST=$(curl -L -s https://dl.k8s.io/release/stable.txt | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')
```

```bash
{
sudo mkdir -p /etc/containerd
1. Download the Kubernetes public signing key
```bash
{
  sudo mkdir -p /etc/apt/keyrings
  curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
}
```

containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
}
```
1. Add the Kubernetes apt repository
```bash
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```

Now restart `containerd` to read the new configuration
1. Install the container runtime and CNI components
```bash
sudo apt update
sudo apt-get install -y containerd kubernetes-cni kubectl ipvsadm ipset
```

```bash
sudo systemctl restart containerd
```
1. Configure the container runtime to use systemd Cgroups. This part is the bit many students miss, and if not done results in a controlplane that comes up, then all the pods start crashlooping. `kubectl` will also fail with an error like `The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?`

1. Create default configuration and pipe it through `sed` to correctly set the Cgroup parameter.

```bash
{
  sudo mkdir -p /etc/containerd
  containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
}
```

1. Restart containerd

```bash
sudo systemctl restart containerd
```
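A quick check that the Cgroup change from the previous step actually took effect before moving on (not part of the original lab steps):

```bash
# Should print a line containing: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml
```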

Prev: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)<br>
Next: [Bootstrapping the Kubernetes Worker Nodes](10-bootstrapping-kubernetes-workers.md)
Next: [Bootstrapping the Kubernetes Worker Nodes](./10-bootstrapping-kubernetes-workers.md)<br>
Prev: [Bootstrapping the Kubernetes Control Plane](./08-bootstrapping-kubernetes-controllers.md)
@@ -8,8 +8,8 @@ We will now install the kubernetes components
|
||||
|
||||
## Prerequisites
|
||||
|
||||
The Certificates and Configuration are created on `master-1` node and then copied over to workers using `scp`.
|
||||
Once this is done, the commands are to be run on first worker instance: `worker-1`. Login to first worker instance using SSH Terminal.
|
||||
The Certificates and Configuration are created on `controlplane01` node and then copied over to workers using `scp`.
|
||||
Once this is done, the commands are to be run on first worker instance: `node01`. Login to first worker instance using SSH Terminal.
|
||||
|
||||
### Provisioning Kubelet Client Certificates
|
||||
|
||||
@@ -17,16 +17,16 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
|
||||
|
||||
Generate a certificate and private key for one worker node:
|
||||
|
||||
On `master-1`:
|
||||
On `controlplane01`:
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
```bash
|
||||
WORKER_1=$(dig +short worker-1)
|
||||
NODE01=$(dig +short node01)
|
||||
```
|
||||
|
||||
```bash
|
||||
cat > openssl-worker-1.cnf <<EOF
|
||||
cat > openssl-node01.cnf <<EOF
|
||||
[req]
|
||||
req_extensions = v3_req
|
||||
distinguished_name = req_distinguished_name
|
||||
@@ -36,27 +36,27 @@ basicConstraints = CA:FALSE
|
||||
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
|
||||
subjectAltName = @alt_names
|
||||
[alt_names]
|
||||
DNS.1 = worker-1
|
||||
IP.1 = ${WORKER_1}
|
||||
DNS.1 = node01
|
||||
IP.1 = ${NODE01}
|
||||
EOF
|
||||
|
||||
openssl genrsa -out worker-1.key 2048
|
||||
openssl req -new -key worker-1.key -subj "/CN=system:node:worker-1/O=system:nodes" -out worker-1.csr -config openssl-worker-1.cnf
|
||||
openssl x509 -req -in worker-1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out worker-1.crt -extensions v3_req -extfile openssl-worker-1.cnf -days 1000
|
||||
openssl genrsa -out node01.key 2048
|
||||
openssl req -new -key node01.key -subj "/CN=system:node:node01/O=system:nodes" -out node01.csr -config openssl-node01.cnf
|
||||
openssl x509 -req -in node01.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out node01.crt -extensions v3_req -extfile openssl-node01.cnf -days 1000
|
||||
```
|
||||
|
||||
Results:
|
||||
|
||||
```
|
||||
worker-1.key
|
||||
worker-1.crt
|
||||
node01.key
|
||||
node01.crt
|
||||
```
|
||||
|
||||
### The kubelet Kubernetes Configuration File
|
||||
|
||||
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).

Get the kub-api server load-balancer IP.
Get the kube-api server load-balancer IP.

```bash
LOADBALANCER=$(dig +short loadbalancer)
@@ -64,55 +64,55 @@ LOADBALANCER=$(dig +short loadbalancer)

Generate a kubeconfig file for the first worker node.

On `master-1`:
On `controlplane01`:

```bash
{
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=/var/lib/kubernetes/pki/ca.crt \
  --server=https://${LOADBALANCER}:6443 \
  --kubeconfig=worker-1.kubeconfig
  --kubeconfig=node01.kubeconfig

kubectl config set-credentials system:node:worker-1 \
  --client-certificate=/var/lib/kubernetes/pki/worker-1.crt \
  --client-key=/var/lib/kubernetes/pki/worker-1.key \
  --kubeconfig=worker-1.kubeconfig
kubectl config set-credentials system:node:node01 \
  --client-certificate=/var/lib/kubernetes/pki/node01.crt \
  --client-key=/var/lib/kubernetes/pki/node01.key \
  --kubeconfig=node01.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes-the-hard-way \
  --user=system:node:worker-1 \
  --kubeconfig=worker-1.kubeconfig
  --user=system:node:node01 \
  --kubeconfig=node01.kubeconfig

kubectl config use-context default --kubeconfig=worker-1.kubeconfig
kubectl config use-context default --kubeconfig=node01.kubeconfig
}
```

Results:

```
worker-1.kubeconfig
node01.kubeconfig
```

### Copy certificates, private keys and kubeconfig files to the worker node:

On `master-1`:
On `controlplane01`:

```bash
scp ca.crt worker-1.crt worker-1.key worker-1.kubeconfig worker-1:~/
scp ca.crt node01.crt node01.key node01.kubeconfig node01:~/
```

### Download and Install Worker Binaries

All the following commands from here until the [verification](#verification) step must be run on `worker-1`
All the following commands from here until the [verification](#verification) step must be run on `node01`

[//]: # (host:worker-1)
[//]: # (host:node01)

```bash
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)

wget -q --show-progress --https-only --timestamping \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-proxy \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubelet
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-proxy \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kubelet
```

Reference: https://kubernetes.io/releases/download/#binaries
@@ -138,7 +138,7 @@ Install the worker binaries:

### Configure the Kubelet

On worker-1:
On `node01`:

Copy keys and config to the correct directories and secure them.
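
The exact commands follow in the full lab; as a rough sketch of the idea (paths taken from the kubeconfig and kubelet unit shown in this diff, permissions are an assumption):

```bash
# Sketch only - the lab's own commands are authoritative.
{
  sudo mkdir -p /var/lib/kubernetes/pki /var/lib/kubelet /var/lib/kube-proxy
  sudo mv node01.crt node01.key ca.crt /var/lib/kubernetes/pki/   # referenced by the kubeconfig above
  sudo mv node01.kubeconfig /var/lib/kubelet/kubelet.kubeconfig   # referenced by the kubelet unit below
  sudo chmod 600 /var/lib/kubernetes/pki/*.key                    # keep private keys readable by root only
}
```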

@@ -214,6 +214,7 @@ Requires=containerd.service
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --kubeconfig=/var/lib/kubelet/kubelet.kubeconfig \\
  --node-ip=${PRIMARY_IP} \\
  --v=2
Restart=on-failure
RestartSec=5
@@ -225,7 +226,7 @@ EOF

### Configure the Kubernetes Proxy

On worker-1:
On `node01`:

```bash
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/
@@ -241,7 +242,7 @@ kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
mode: ipvs
mode: iptables
clusterCIDR: ${POD_CIDR}
EOF
```
@@ -267,7 +268,7 @@ EOF

## Optional - Check Certificates and kubeconfigs

At `worker-1` node, run the following, selecting option 4
At `node01` node, run the following, selecting option 4

[//]: # (command:./cert_verify.sh 4)

@@ -278,7 +279,8 @@ At `worker-1` node, run the following, selecting option 4

### Start the Worker Services

On worker-1:
On `node01`:

```bash
{
sudo systemctl daemon-reload
@@ -287,28 +289,28 @@ On worker-1:
}
```

> Remember to run the above commands on worker node: `worker-1`
> Remember to run the above commands on worker node: `node01`
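
Once the services are up, a hedged way to confirm kube-proxy picked up the `iptables` mode configured above (the exact log wording varies between Kubernetes versions):

```bash
# Look for the proxier selection message in the kube-proxy service logs.
sudo journalctl -u kube-proxy | grep -i proxier
```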

## Verification

[//]: # (host:master-1)
[//]: # (host:controlplane01)

Now return to the `master-1` node.
Now return to the `controlplane01` node.

List the registered Kubernetes nodes from the master node:
List the registered Kubernetes nodes from the controlplane node:

```bash
kubectl get nodes --kubeconfig admin.kubeconfig
```

> output
Output will be similar to

```
NAME       STATUS     ROLES    AGE   VERSION
worker-1   NotReady   <none>   93s   v1.28.4
node01     NotReady   <none>   93s   v1.28.4
```

The node is not ready as we have not yet installed pod networking. This comes later.
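
If you want to see why the node reports `NotReady`, a hedged check from `controlplane01` (the exact condition message depends on the container runtime and CNI versions):

```bash
# Show the node conditions; expect a message about the network plugin not being ready.
kubectl describe node node01 --kubeconfig admin.kubeconfig | grep -A 8 'Conditions:'
```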

Prev: [Installing CRI on the Kubernetes Worker Nodes](09-install-cri-workers.md)<br>
Next: [TLS Bootstrapping Kubernetes Workers](11-tls-bootstrapping-kubernetes-workers.md)
Next: [TLS Bootstrapping Kubernetes Workers](./11-tls-bootstrapping-kubernetes-workers.md)<br>
Prev: [Installing CRI on the Kubernetes Worker Nodes](./09-install-cri-workers.md)

@@ -6,7 +6,7 @@ In the previous step we configured a worker node by
- Creating a kube-config file using this certificate by ourselves
- Every time the certificate expires we must follow the same process of updating the certificate ourselves

This is not a practical approach when you have 1000s of nodes in the cluster, and nodes dynamically being added and removed from the cluster. With TLS bootstrapping:
This is not a practical approach when you could have 1000s of nodes in the cluster, and nodes dynamically being added and removed from the cluster. With TLS bootstrapping:

- The Nodes can generate certificate key pairs by themselves
- The Nodes can generate certificate signing requests by themselves
@@ -41,11 +41,11 @@ So let's get started!

> Note: We have already configured these in lab 8 in this course

# Step 1 Create the Bootstrap Token to be used by Nodes(Kubelets) to invoke Certificate API
# Step 1 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke Certificate API

[//]: # (host:master-1)
[//]: # (host:controlplane01)

Run the following steps on `master-1`
Run the following steps on `controlplane01`

For the workers (kubelets) to access the Certificates API, they need to authenticate to the Kubernetes api-server first. For this we create a [Bootstrap Token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) to be used by the kubelet.

@@ -100,7 +100,7 @@ Once this is created the token to be used for authentication is `07401b.f395accd

Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#bootstrap-token-secret-format
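
A quick, hedged confirmation that the bootstrap token secret exists (this assumes the secret was created in `kube-system` with the conventional `bootstrap-token-` name prefix):

```bash
kubectl get secrets -n kube-system --kubeconfig admin.kubeconfig | grep bootstrap-token
```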

## Step 2 Authorize workers(kubelets) to create CSR
## Step 2 Authorize nodes (kubelets) to create CSR

Next we associate the group we created before with the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet.
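
If you are curious what that built-in ClusterRole actually allows, this read-only check is safe to run on `controlplane01`:

```bash
kubectl describe clusterrole system:node-bootstrapper --kubeconfig admin.kubeconfig
```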

@@ -135,7 +135,7 @@ kubectl create -f csrs-for-bootstrapping.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr

## Step 3 Authorize workers(kubelets) to approve CSRs
## Step 3 Authorize nodes (kubelets) to approve CSRs

```bash
kubectl create clusterrolebinding auto-approve-csrs-for-group \
@@ -168,7 +168,7 @@ kubectl create -f auto-approve-csrs-for-group.yaml --kubeconfig admin.kubeconfig

Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval

## Step 4 Authorize workers(kubelets) to Auto Renew Certificates on expiration
## Step 4 Authorize nodes (kubelets) to Auto Renew Certificates on expiration

We now create the Cluster Role Binding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more, since by the time renewal is due, the node will already be bootstrapped and part of the cluster. All nodes are part of the **system:nodes** group.
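
The full lab applies a manifest for this binding; an equivalent imperative one-liner would look roughly like the sketch below (the binding name is an assumption, while the ClusterRole is the one documented upstream for self-renewal of node client certificates):

```bash
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes \
  --kubeconfig admin.kubeconfig
```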

@@ -206,9 +206,9 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-b

## Step 5 Configure the Binaries on the Worker node

Going forward all activities are to be done on the `worker-2` node until [step 11](#step-11-approve-server-csr).
Going forward all activities are to be done on the `node02` node until [step 11](#step-11-approve-server-csr).

[//]: # (host:worker-2)
[//]: # (host:node02)

### Download and Install Worker Binaries

@@ -218,8 +218,8 @@ Note that kubectl is required here to assist with creating the boostrap kubeconf
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)

wget -q --show-progress --https-only --timestamping \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-proxy \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubelet
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-proxy \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kubelet
```

Reference: https://kubernetes.io/releases/download/#binaries
@@ -256,10 +256,10 @@ Move the certificates and secure them.

It is now time to configure the second worker to TLS bootstrap using the token we generated.

For worker-1 we started by creating a kubeconfig file with the TLS certificates that we manually generated.
For `node01` we started by creating a kubeconfig file with the TLS certificates that we manually generated.
Here, we don't have the certificates yet. So we cannot create a kubeconfig file. Instead we create a bootstrap-kubeconfig file with information about the token we created.

This is to be done on the `worker-2` node. Note that now that we have set up the load balancer to provide high availability across the API servers, we point the kubelet at the load balancer.
This is to be done on the `node02` node. Note that now that we have set up the load balancer to provide high availability across the API servers, we point the kubelet at the load balancer.
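
Conceptually, the bootstrap kubeconfig assembled in the following steps looks something like this minimal sketch; the context/user names and output path here are illustrative assumptions, and the lab's own commands (using variables such as `LOADBALANCER` and a `BOOTSTRAP_TOKEN` holding the token created in Step 1) are authoritative:

```bash
# Sketch: a kubeconfig that carries only the cluster CA and the bootstrap token,
# which the kubelet uses to request its real client certificate.
{
  kubectl config set-cluster bootstrap \
    --server="https://${LOADBALANCER}:6443" \
    --certificate-authority=ca.crt \
    --kubeconfig=bootstrap-kubeconfig
  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=bootstrap-kubeconfig
  kubectl config set-context bootstrap \
    --user=kubelet-bootstrap --cluster=bootstrap \
    --kubeconfig=bootstrap-kubeconfig
  kubectl config use-context bootstrap --kubeconfig=bootstrap-kubeconfig
}
```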

Set up some shell variables for nodes and services we will require in the following configurations:

@@ -367,6 +367,7 @@ ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --cert-dir=/var/lib/kubelet/pki/ \\
  --node-ip=${PRIMARY_IP} \\
  --v=2
Restart=on-failure
RestartSec=5
@@ -404,7 +405,7 @@ kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
mode: ipvs
mode: iptables
clusterCIDR: ${POD_CIDR}
EOF
```
@@ -431,7 +432,7 @@ EOF

## Step 10 Start the Worker Services

On worker-2:
On `node02`:

```bash
{
@@ -440,11 +441,11 @@ On worker-2:
sudo systemctl start kubelet kube-proxy
}
```
> Remember to run the above commands on worker node: `worker-2`
> Remember to run the above commands on worker node: `node02`

### Optional - Check Certificates and kubeconfigs

At `worker-2` node, run the following, selecting option 5
At `node02` node, run the following, selecting option 5

[//]: # (command:sleep 5)
[//]: # (command:./cert_verify.sh 5)
@@ -456,11 +457,11 @@ At `worker-2` node, run the following, selecting option 5

## Step 11 Approve Server CSR

Now, go back to `master-1` and approve the pending kubelet-serving certificate
Now, go back to `controlplane01` and approve the pending kubelet-serving certificate

[//]: # (host:master-1)
[//]: # (host:controlplane01)
[//]: # (command:sudo apt install -y jq)
[//]: # (command:kubectl certificate approve --kubeconfig admin.kubeconfig $(kubectl get csr --kubeconfig admin.kubeconfig -o json | jq -r '.items | .[] | select(.spec.username == "system:node:worker-2") | .metadata.name'))
[//]: # (command:kubectl certificate approve --kubeconfig admin.kubeconfig $(kubectl get csr --kubeconfig admin.kubeconfig -o json | jq -r '.items | .[] | select(.spec.username == "system:node:node02") | .metadata.name'))

```bash
kubectl get csr --kubeconfig admin.kubeconfig
@@ -470,7 +471,7 @@ kubectl get csr --kubeconfig admin.kubeconfig

```
NAME        AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-7k8nh   85s   kubernetes.io/kubelet-serving                 system:node:worker-2      <none>              Pending
csr-7k8nh   85s   kubernetes.io/kubelet-serving                 system:node:node02        <none>              Pending
csr-n7z8p   98s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:07401b   <none>              Approved,Issued
```
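
The hidden automation comment above already encodes the approval of the pending kubelet-serving CSR; run interactively it is the same command (requires `jq`):

```bash
# Approve the kubelet-serving CSR raised by node02.
kubectl certificate approve --kubeconfig admin.kubeconfig \
  $(kubectl get csr --kubeconfig admin.kubeconfig -o json \
    | jq -r '.items | .[] | select(.spec.username == "system:node:node02") | .metadata.name')
```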

@@ -487,19 +488,21 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-b

## Verification

List the registered Kubernetes nodes from the master node:
List the registered Kubernetes nodes from the controlplane node:

```bash
kubectl get nodes --kubeconfig admin.kubeconfig
```

> output
Output will be similar to

```
NAME       STATUS     ROLES    AGE   VERSION
worker-1   NotReady   <none>   93s   v1.28.4
worker-2   NotReady   <none>   93s   v1.28.4
node01     NotReady   <none>   93s   v1.28.4
node02     NotReady   <none>   93s   v1.28.4
```

Prev: [Bootstrapping the Kubernetes Worker Nodes](10-bootstrapping-kubernetes-workers.md)</br>
Next: [Configuring Kubectl](12-configuring-kubectl.md)
Nodes are still not yet ready. As previously mentioned, this is expected.

Next: [Configuring Kubectl](./12-configuring-kubectl.md)</br>
Prev: [Bootstrapping the Kubernetes Worker Nodes](./10-bootstrapping-kubernetes-workers.md)

@@ -8,9 +8,9 @@ In this lab you will generate a kubeconfig file for the `kubectl` command line u

Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.

[//]: # (host:master-1)
[//]: # (host:controlplane01)

On `master-1`
On `controlplane01`

Get the kube-api server load-balancer IP.
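
The commands elided from this hunk follow the same `kubectl config` pattern used for the node kubeconfigs in the earlier labs; a hedged sketch of the idea (flag choices and file names here are assumptions, not the lab's exact commands):

```bash
# Sketch: point the default kubeconfig at the load balancer using the admin cert pair.
LOADBALANCER=$(dig +short loadbalancer)

kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.crt --embed-certs=true \
  --server=https://${LOADBALANCER}:6443
kubectl config set-credentials admin \
  --client-certificate=admin.crt --client-key=admin.key
kubectl config set-context kubernetes-the-hard-way \
  --cluster=kubernetes-the-hard-way --user=admin
kubectl config use-context kubernetes-the-hard-way
```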

@@ -50,7 +50,7 @@ Check the health of the remote Kubernetes cluster:
kubectl get componentstatuses
```

> output
Output will be similar to this. It may or may not list both etcd instances; however, this is OK if you verified correct installation of etcd in lab 7.

```
Warning: v1 ComponentStatus is deprecated in v1.19+
@@ -71,9 +71,9 @@ kubectl get nodes

```
NAME       STATUS     ROLES    AGE    VERSION
worker-1   NotReady   <none>   118s   v1.28.4
worker-2   NotReady   <none>   118s   v1.28.4
node01     NotReady   <none>   118s   v1.28.4
node02     NotReady   <none>   118s   v1.28.4
```

Prev: [TLS Bootstrapping Kubernetes Workers](11-tls-bootstrapping-kubernetes-workers.md)</br>
Next: [Deploy Pod Networking](13-configure-pod-networking.md)
Next: [Deploy Pod Networking](./13-configure-pod-networking.md)</br>
Prev: [TLS Bootstrapping Kubernetes Workers](./11-tls-bootstrapping-kubernetes-workers.md)

@@ -7,30 +7,32 @@ We chose to use CNI - [weave](https://www.weave.works/docs/net/latest/kubernetes

### Deploy Weave Network

Deploy weave network. Run only once on the `master-1` node. You will see a warning, but this is OK.
Some of you may have noticed the announcement that WeaveWorks is no longer trading. At this time, this does not mean that Weave is not a valid CNI. WeaveWorks software has always been and remains open source, and as such is still usable. It just means that the company is no longer providing updates. While it continues to be compatible with Kubernetes, we will continue to use it, as the other options (e.g. Calico, Cilium) require far more configuration steps.

[//]: # (host:master-1)
Deploy weave network. Run only once on the `controlplane01` node. You may see a warning, but this is OK.

On `master-1`
[//]: # (host:controlplane01)

On `controlplane01`

```bash
kubectl apply -f "https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml"

```

Weave uses POD CIDR of `10.244.0.0/16` by default.
It may take up to 60 seconds for the Weave pods to be ready.
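
If you changed the pod network CIDR in the prerequisites, Weave has to be told about it. One hedged way is to set Weave Net's `IPALLOC_RANGE` environment variable on the DaemonSet after applying the manifest (the container name `weave` is an assumption about the published manifest):

```bash
# Only needed if POD_CIDR was changed from the default 10.244.0.0/16;
# substitute your own range.
kubectl set env daemonset/weave-net -n kube-system -c weave IPALLOC_RANGE=10.244.0.0/16
```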

## Verification

[//]: # (command:kubectl rollout status daemonset weave-net -n kube-system --timeout=90s)

List the registered Kubernetes nodes from the master node:
List the registered Kubernetes nodes from the controlplane node:

```bash
kubectl get pods -n kube-system
```

> output
Output will be similar to

```
NAME              READY   STATUS    RESTARTS   AGE
@@ -38,21 +40,21 @@ weave-net-58j2j   2/2     Running   0          89s
weave-net-rr5dk   2/2     Running   0          89s
```

Once the Weave pods are fully running which might take up to 60 seconds, the nodes should be ready
Once the Weave pods are fully running, the nodes should be ready.

```bash
kubectl get nodes
```

> Output
Output will be similar to

```
NAME       STATUS   ROLES    AGE     VERSION
worker-1   Ready    <none>   4m11s   v1.28.4
worker-2   Ready    <none>   2m49s   v1.28.4
node01     Ready    <none>   4m11s   v1.28.4
node02     Ready    <none>   2m49s   v1.28.4
```

Reference: https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/#install-the-weave-net-addon

Prev: [Configuring Kubectl](12-configuring-kubectl.md)</br>
Next: [Kube API Server to Kubelet Connectivity](14-kube-apiserver-to-kubelet.md)
Next: [Kube API Server to Kubelet Connectivity](./14-kube-apiserver-to-kubelet.md)</br>
Prev: [Configuring Kubectl](./12-configuring-kubectl.md)

@@ -4,9 +4,9 @@ In this section you will configure RBAC permissions to allow the Kubernetes API

> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.

[//]: # (host:master-1)
[//]: # (host:controlplane01)

Run the below on the `master-1` node.
Run the below on the `controlplane01` node.

Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
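
After the manifest that follows in the full document is applied, a quick read-only check that the ClusterRole exists:

```bash
kubectl get clusterrole system:kube-apiserver-to-kubelet -o yaml
```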

@@ -58,5 +58,5 @@ EOF
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding

Prev: [Deploy Pod Networking](13-configure-pod-networking.md)</br>
Next: [DNS Addon](15-dns-addon.md)
Next: [DNS Addon](./15-dns-addon.md)</br>
Prev: [Deploy Pod Networking](./13-configure-pod-networking.md)

@@ -4,11 +4,11 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts

## The DNS Cluster Add-on

[//]: # (host:master-1)
[//]: # (host:controlplane01)

Deploy the `coredns` cluster add-on:

Note that if you have [changed the service CIDR range](./01-prerequisites.md#service-network) and thus this file, you will need to save your copy onto `master-1` (paste to vi, then save) and apply that.
Note that if you have [changed the service CIDR range](./01-prerequisites.md#service-network) and thus this file, you will need to save your copy onto `controlplane01` (paste to vi, then save) and apply that.

```bash
kubectl apply -f https://raw.githubusercontent.com/mmumshad/kubernetes-the-hard-way/master/deployments/coredns.yaml
@@ -83,5 +83,5 @@ Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
```
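
A hedged check that the DNS pods came up before moving on (the label and namespace are assumptions about the published CoreDNS manifest):

```bash
kubectl get pods -n kube-system -l k8s-app=kube-dns
```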

Prev: [Kube API Server to Kubelet Connectivity](14-kube-apiserver-to-kubelet.md)</br>
Next: [Smoke Test](16-smoke-test.md)
Next: [Smoke Test](./16-smoke-test.md)</br>
Prev: [Kube API Server to Kubelet Connectivity](./14-kube-apiserver-to-kubelet.md)

@@ -4,7 +4,7 @@ In this lab you will complete a series of tasks to ensure your Kubernetes cluste

## Data Encryption

[//]: # (host:master-1)
[//]: # (host:controlplane01)

In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
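
As a rough, hedged sketch of the kind of check this section performs (the secret name, etcd endpoint and certificate paths are assumptions; the lab's own commands are authoritative):

```bash
# Create a secret, then read its raw entry from etcd on controlplane01 and
# confirm the stored value is prefixed k8s:enc:aescbc:v1: rather than plaintext.
kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"

sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C
```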

@@ -61,7 +61,7 @@ In this section you will verify the ability to create and manage [Deployments](h
Create a deployment for the [nginx](https://nginx.org/en/) web server:

```bash
kubectl create deployment nginx --image=nginx:1.23.1
kubectl create deployment nginx --image=nginx:alpine
```

[//]: # (command:kubectl wait deployment -n default nginx --for condition=Available=True --timeout=90s)
@@ -89,6 +89,7 @@ Create a service to expose deployment nginx on node ports.
kubectl expose deploy nginx --type=NodePort --port 80
```

[//]: # (command:sleep 2)

```bash
PORT_NUMBER=$(kubectl get svc -l app=nginx -o jsonpath="{.items[0].spec.ports[0].nodePort}")
@@ -97,8 +98,8 @@ PORT_NUMBER=$(kubectl get svc -l app=nginx -o jsonpath="{.items[0].spec.ports[0]
Test to view NGINX page

```bash
curl http://worker-1:$PORT_NUMBER
curl http://worker-2:$PORT_NUMBER
curl http://node01:$PORT_NUMBER
curl http://node02:$PORT_NUMBER
```

> output
@@ -160,5 +161,5 @@ kubectl delete service -n default nginx
kubectl delete deployment -n default nginx
```

Prev: [DNS Addon](15-dns-addon.md)</br>
Next: [End to End Tests](17-e2e-tests.md)
Next: [End to End Tests](./17-e2e-tests.md)</br>
Prev: [DNS Addon](./15-dns-addon.md)

@@ -1,17 +1,21 @@
# Run End-to-End Tests

Optional Lab.

Observations by Alistair (KodeKloud):

Depending on your computer, you may have varying success with these. I have found them to run much more smoothly on a 12 core Intel(R) Core(TM) i7-7800X Desktop Processor (circa 2017), than on a 20 core Intel(R) Core(TM) i7-12700H Laptop processor (circa 2022) - both machines having 32GB RAM and both machines running the same version of VirtualBox. On the latter, it tends to destabilize the cluster resulting in timeouts in the tests. This *may* be a processor issue in that laptop processors are not really designed to take the kind of abuse that'll be thrown by the tests at a kube cluster that really should be run on a Server processor. Laptop processors do odd things for power conservation like constantly varying the clock speed and mixing "performance" and "efficiency" cores, even when the laptop is plugged in, and this could be causing synchronization issues with the goroutines running in the kube components. If anyone has a definitive explanation for this, please do post in the kubernetes-the-hard-way Slack channel.
Depending on your computer, you may have varying success with these. I have found them to run much more smoothly on a 12 core Intel(R) Core(TM) i7-7800X Desktop Processor (circa 2017), than on a 20 core Intel(R) Core(TM) i7-12700H Laptop processor (circa 2022) - both machines having 32GB RAM and both machines running the same version of VirtualBox. On the latter, it tends to destabilize the cluster resulting in timeouts in the tests. This *may* be a processor issue in that laptop processors are not really designed to take the kind of abuse that'll be thrown by the tests at a kube cluster that really should be run on a Server processor. Laptop processors do odd things for power conservation like constantly varying the clock speed and mixing "performance" and "efficiency" cores, even when the laptop is plugged in, and this could be causing synchronization issues with the goroutines running in the kube components. If anyone has a definitive explanation for this, please do post in the Kubernetes section of the [Community Forum](https://kodekloud.com/community/c/kubernetes/6).

Test suite should be installed to and run from `controlplane01`

## Install latest Go

```bash
GO_VERSION=$(curl -s 'https://go.dev/VERSION?m=text' | head -1)
wget "https://dl.google.com/go/${GO_VERSION}.linux-amd64.tar.gz"
wget "https://dl.google.com/go/${GO_VERSION}.linux-${ARCH}.tar.gz"

sudo tar -C /usr/local -xzf ${GO_VERSION}.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf ${GO_VERSION}.linux-${ARCH}.tar.gz

sudo ln -s /usr/local/go/bin/go /usr/local/bin/go
sudo ln -s /usr/local/go/bin/gofmt /usr/local/bin/gofmt
@@ -32,7 +36,7 @@ sudo snap install google-cloud-cli --classic

## Run test

Here we set up a couple of environment variables to supply arguments to the test package - the version of our cluster and the number of CPUs on `master-1` to aid with test parallelization.
Here we set up a couple of environment variables to supply arguments to the test package - the version of our cluster and the number of CPUs on `controlplane01` to aid with test parallelization.

Then we invoke the test package

@@ -42,25 +46,19 @@ NUM_CPU=$(cat /proc/cpuinfo | grep '^processor' | wc -l)

cd ~
kubetest2 noop --kubeconfig ${PWD}/.kube/config --test=ginkgo -- \
  --focus-regex='\[Conformance\]' --test-package-version $KUBE_VERSION --logtostderr --parallel $NUM_CPU
  --focus-regex='\[Conformance\]' --test-package-version $KUBE_VERSION --parallel $NUM_CPU
```

While this is running, you can open an additional session on `master-1` from your workstation and watch the activity in the cluster

```
vagrant ssh master-1
```

then
While this is running, you can open an additional session on `controlplane01` from your workstation and watch the activity in the cluster -

```
watch kubectl get all -A
```

Observations by Alistair (KodeKloud):
Further observations by Alistair (KodeKloud):

This should take up to an hour to run. The number of tests run and passed will be displayed at the end. Expect some failures!
This could take between an hour and several hours to run depending on your system. The number of tests run and passed will be displayed at the end. Expect some failures!

I am not able to say exactly why the failed tests fail. It would take days to go through the truly enormous test code base to determine why the tests that fail do so.
I am not able to say exactly why the failed tests fail over and above the assumptions above. It would take days to go through the truly enormous test code base to determine why the tests that fail do so.

Prev: [Smoke Test](16-smoke-test.md)
Prev: [Smoke Test](./16-smoke-test.md)
@@ -1,9 +1,9 @@
# Differences between original and this solution

* Platform: I use VirtualBox to set up a local cluster, the original one uses GCP.
* Nodes: 2 master and 2 worker vs 2 master and 3 worker nodes.
* Nodes: 2 controlplane and 2 worker vs 2 controlplane and 3 worker nodes.
* Configure 1 worker node normally and the second one with TLS bootstrap.
* Node names: I use worker-1 worker-2 instead of worker-0 worker-1.
* Node names: I use node01 node02 instead of worker-0 worker-1.
* IP Addresses: I use statically assigned IPs on private network.
* Certificate file names: I use \<name\>.crt for public certificate and \<name\>.key for private key file. Whereas original one uses \<name\>-.pem for certificate and \<name\>-key.pem for private key.
* I generate separate certificates for etcd-server instead of using kube-apiserver.

(Four binary image files, 100 KiB, 75 KiB, 116 KiB and 44 KiB, were removed in this commit and are not shown.)
@@ -1,10 +1,10 @@
# Verify Certificates in Master-1/2 & Worker-1
# Verify Certificates in controlplane-1/2 & Worker-1

> Note: This script is only intended to work with a kubernetes cluster setup following instructions from this repository. It is not a generic script that works for all kubernetes clusters. Feel free to send in PRs with improvements.

This script was developed to assist the verification of certificates for each Kubernetes component as part of building the cluster. This script may be executed as soon as you have completed the Lab steps up to [Bootstrapping the Kubernetes Worker Nodes](./09-bootstrapping-kubernetes-workers.md). The script is named as `cert_verify.sh` and it is available at `/home/vagrant` directory of master-1 , master-2 and worker-1 nodes. If it's not already available there copy the script to the nodes from [here](../vagrant/ubuntu/cert_verify.sh).
This script was developed to assist the verification of certificates for each Kubernetes component as part of building the cluster. This script may be executed as soon as you have completed the Lab steps up to [Bootstrapping the Kubernetes Worker Nodes](./09-bootstrapping-kubernetes-workers.md). The script is named `cert_verify.sh` and it is available in the `/home/vagrant` directory of the controlplane01, controlplane02 and node01 nodes. If it's not already available there, copy the script to the nodes from [here](../vagrant/ubuntu/cert_verify.sh).

It is important that the script execution needs to be done by following commands after logging into the respective virtual machines [ whether it is master-1 / master-2 / worker-1 ] via SSH.
It is important that the script is executed with the following commands after logging into the respective virtual machine (controlplane01 / controlplane02 / node01) via SSH.

```bash
cd ~