Refresh and add Apple Silicon (#338)
* Delete CKA stuff. It's covered in CKA repo * Rename nodes * Cluster up again * Update issue template * Update README * Begin rearranging docs * Update links * Initial mac instructions * iterm2 image * update ssh-copy-id to be cross platform * remove vagrant specific * Apple scripts WIP * Add var for architecture * order input files * Apple build working! * auto-locate docs * install sshpass * Set execute bit * apple done! * install sshpass * edits * Corrections * kube version output * Adjustments * Adjustments
|
@ -1,5 +1,5 @@
|
|||
blank_issues_enabled: false
|
||||
contact_links:
|
||||
- name: KodeKloud Slack
|
||||
url: kodekloud.slack.com
|
||||
about: Please use Slack to ask anything unrelated to Kubernetes the Hard Way.
|
||||
- name: KodeKloud Forum
|
||||
url: https://community.kodekloud.com/
|
||||
about: Please use the Community Forum to ask about anything unrelated to Kubernetes the Hard Way.
|
||||
|
|
README.md
|
@ -1,25 +1,25 @@
|
|||
> This tutorial is a modified version of the original developed by [Kelsey Hightower](https://github.com/kelseyhightower/kubernetes-the-hard-way).
|
||||
# Kubernetes The Hard Way
|
||||
|
||||
# Kubernetes The Hard Way On VirtualBox
|
||||
Updated: March 2024
|
||||
|
||||
**IMPORTANT** This currently does not work on Apple M1/M2. Oracle are yet to release a compatible version for these systems.
|
||||
|
||||
This tutorial walks you through setting up Kubernetes the hard way on a local machine using VirtualBox.
|
||||
This tutorial walks you through setting up Kubernetes the hard way on a local machine using a hypervisor.
|
||||
This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster.
|
||||
If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
|
||||
|
||||
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
|
||||
Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster. Note that the cluster when built will not be accessible from your laptop browser - that isn't what this is about. If you want a more usable cluster, try [one of these](https://github.com/kodekloudhub/certified-kubernetes-administrator-course/tree/master/kubeadm-clusters).
|
||||
|
||||
This tutorial is a modified version of the original developed by [Kelsey Hightower](https://github.com/kelseyhightower/kubernetes-the-hard-way).
|
||||
While the original one uses GCP as the platform to deploy kubernetes, we use VirtualBox and Vagrant to deploy a cluster on a local machine. If you prefer the cloud version, refer to the original one [here](https://github.com/kelseyhightower/kubernetes-the-hard-way)
|
||||
While the original one uses GCP as the platform to deploy kubernetes, we use a hypervisor to deploy a cluster on a local machine. If you prefer the cloud version, refer to the original one [here](https://github.com/kelseyhightower/kubernetes-the-hard-way)
|
||||
|
||||
> The results of this tutorial should *not* be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!<br/>Note that we are only building 2 masters here instead of the recommended 3 that `etcd` requires to maintain quorum. This is to save on resources, and simply to show how to load balance across more than one master.
|
||||
The results of this tutorial should *not* be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!<br/>Note that we are only building 2 controlplane nodes here instead of the recommended 3 that `etcd` requires to maintain quorum. This is to save on resources, and simply to show how to load balance across more than one controlplane node.
|
||||
|
||||
Please note that with this particular challenge, it is all about the minute detail. If you miss one tiny step anywhere along the way, it's going to break!
|
||||
### <font color="red">Before shouting "Help! It's not working!"</font>
|
||||
|
||||
Please note that with this particular challenge, it is all about the minute detail. If you miss _one tiny step_ anywhere along the way, it's going to break!
|
||||
|
||||
Note also that in developing this lab, it has been tested *many many* times! Once you have the VMs up and you start to build the cluster, if at any point something isn't working it is 99.9999% likely to be because you missed something, not a bug in the lab!
|
||||
|
||||
Always run the `cert_verify.sh` script at the places it suggests, and always ensure you are on the correct node when you do stuff. If `cert_verify.sh` shows anything in red, then you have made an error in a previous step. For the master node checks, run the check on `master-1` and on `master-2`
|
||||
Always run the `cert_verify.sh` script at the places it suggests, and always ensure you are on the correct node when you do stuff. If `cert_verify.sh` shows anything in red, then you have made an error in a previous step. For the controlplane node checks, run the check on `controlplane01` and on `controlplane02`
|
||||
|
||||
## Target Audience
|
||||
|
||||
|
@ -39,27 +39,12 @@ Kubernetes The Hard Way guides you through bootstrapping a highly available Kube
|
|||
|
||||
We will be building the following:
|
||||
|
||||
* Two control plane nodes (`master-1` and `master-2`) running the control plane components as operating system services.
|
||||
* Two worker nodes (`worker-1` and `worker-2`)
|
||||
* One loadbalancer VM running [HAProxy](https://www.haproxy.org/) to balance requests between the two API servers.
|
||||
* Two control plane nodes (`controlplane01` and `controlplane02`) running the control plane components as operating system services. This is not a kubeadm cluster as you are used to if you have been doing the CKA course. The control planes are *not* themselves nodes, therefore will not show with `kubectl get nodes`.
|
||||
* Two worker nodes (`node01` and `node02`)
|
||||
* One loadbalancer VM running [HAProxy](https://www.haproxy.org/) to balance requests between the two API servers and provide the endpoint for your KUBECONFIG.
|
||||
|
||||
## Labs
|
||||
## Getting Started
|
||||
|
||||
* If you are using Windows or Intel Mac, start [here](./VirtualBox/docs/01-prerequisites.md) to deploy VirtualBox and Vagrant.
|
||||
* If you are using Apple Silicon Mac (M1/M2/M3), start [here](./apple-silicon/docs/01-prerequisites.md) to deploy Multipass.
|
||||
|
||||
* [Prerequisites](docs/01-prerequisites.md)
|
||||
* [Provisioning Compute Resources](docs/02-compute-resources.md)
|
||||
* [Installing the Client Tools](docs/03-client-tools.md)
|
||||
* [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md)
|
||||
* [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
|
||||
* [Generating the Data Encryption Config and Key](docs/06-data-encryption-keys.md)
|
||||
* [Bootstrapping the etcd Cluster](docs/07-bootstrapping-etcd.md)
|
||||
* [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controllers.md)
|
||||
* [Installing CRI on Worker Nodes](docs/09-install-cri-workers.md)
|
||||
* [Bootstrapping the Kubernetes Worker Nodes](docs/10-bootstrapping-kubernetes-workers.md)
|
||||
* [TLS Bootstrapping the Kubernetes Worker Nodes](docs/11-tls-bootstrapping-kubernetes-workers.md)
|
||||
* [Configuring kubectl for Remote Access](docs/12-configuring-kubectl.md)
|
||||
* [Deploy Weave - Pod Networking Solution](docs/13-configure-pod-networking.md)
|
||||
* [Kube API Server to Kubelet Configuration](docs/14-kube-apiserver-to-kubelet.md)
|
||||
* [Deploying the DNS Cluster Add-on](docs/15-dns-addon.md)
|
||||
* [Smoke Test](docs/16-smoke-test.md)
|
||||
* [E2E Test](docs/17-e2e-tests.md)
|
||||
* [Extra - Certificate Verification](docs/verify-certificates.md)
|
||||
|
|
|
@ -1,20 +1,29 @@
|
|||
# Prerequisites
|
||||
# Kubernetes The Hard Way on VirtualBox
|
||||
|
||||
## VM Hardware Requirements
|
||||
Begin here if your machine is Windows or Intel Mac. For these machines, we use VirtualBox as the hypervisor, and Vagrant to provision the Virtual Machines.
|
||||
|
||||
- 8 GB of RAM (preferably 16 GB)
|
||||
This should also work with Linux (as the host operating system, not running in a VM), but it has not been tested so far.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
|
||||
### Hardware Requirements
|
||||
|
||||
This lab provisions 5 VMs on your workstation. That's a lot of compute resource!
|
||||
|
||||
- 16GB RAM. It may work with less, but will be slow and may crash unexpectedly.
|
||||
- 8 core or better CPU e.g. Intel Core-i7/Core-i9, AMD Ryzen-7/Ryzen-9. May work with fewer, but will be slow and may crash unexpectedly.
|
||||
- 50 GB disk space
|
||||
|
||||
## VirtualBox
|
||||
### VirtualBox
|
||||
|
||||
Download and install [VirtualBox](https://www.virtualbox.org/wiki/Downloads) on any one of the supported platforms:
|
||||
|
||||
- Windows hosts
|
||||
- OS X hosts (x86 only, not Apple Silicon M-series)
|
||||
- Linux distributions
|
||||
- Solaris hosts
|
||||
- Windows
|
||||
- Intel Mac
|
||||
- Linux
|
||||
|
||||
## Vagrant
|
||||
### Vagrant
|
||||
|
||||
Once VirtualBox is installed you may choose to deploy virtual machines manually on it.
|
||||
Vagrant provides an easier way to deploy multiple virtual machines on VirtualBox more consistently.
|
||||
|
@ -22,17 +31,17 @@ Vagrant provides an easier way to deploy multiple virtual machines on VirtualBox
|
|||
Download and install [Vagrant](https://www.vagrantup.com/) on your platform.
|
||||
|
||||
- Windows
|
||||
- Debian
|
||||
- Centos
|
||||
- Debian/Ubuntu
|
||||
- CentOS
|
||||
- Linux
|
||||
- macOS (x86 only, not M1)
|
||||
- Intel Mac
|
||||
|
||||
This tutorial assumes that you have also installed Vagrant.
|
||||
|
||||
|
||||
## Lab Defaults
|
||||
### Lab Defaults
|
||||
|
||||
The labs have been configured with the following networking defaults. If you change any of these after you have deployed any of the lab, you'll need to completely reset it and start again from the beginning:
|
||||
The labs have been configured with the following networking defaults. It is not recommended to change these. If you change any of these after you have deployed any of the lab, you'll need to completely reset it and start again from the beginning:
|
||||
|
||||
```bash
|
||||
vagrant destroy -f
|
||||
|
@ -41,33 +50,33 @@ vagrant up
|
|||
|
||||
If you do change any of these, **please consider that a personal preference and don't submit a PR for it**.
|
||||
|
||||
### Virtual Machine Network
|
||||
#### Virtual Machine Network
|
||||
|
||||
The network used by the VirtualBox virtual machines is `192.168.56.0/24`.
|
||||
|
||||
To change this, edit the [Vagrantfile](../vagrant/Vagrantfile) in your cloned copy (do not edit directly in github), and set the new value for the network prefix at line 9. This should not overlap any of the other network settings.
|
||||
To change this, edit the [Vagrantfile](../../vagrant/Vagrantfile) in your cloned copy (do not edit directly in github), and set the new value for the network prefix at line 9. This should not overlap any of the other network settings.
|
||||
|
||||
Note that you do not need to edit any of the other scripts to make the above change. It is all managed by shell variable computations based on the assigned VM IP addresses and the values in the hosts file (also computed).
|
||||
|
||||
It is *recommended* that you leave the pod and service networks with the following defaults. If you change them then you will also need to edit one or both of the CoreDNS and Weave networking manifests to accommodate your change.
|
||||
|
||||
### Pod Network
|
||||
#### Pod Network
|
||||
|
||||
The network used to assign IP addresses to pods is `10.244.0.0/16`.
|
||||
|
||||
To change this, open all the `.md` files in the [docs](../docs/) directory in your favourite IDE and do a global replace on<br>
|
||||
To change this, open all the `.md` files in the [docs](../../docs/) directory in your favourite IDE and do a global replace on<br>
|
||||
`POD_CIDR=10.244.0.0/16`<br>
|
||||
with the new CIDR range. This should not overlap any of the other network settings.
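If you prefer to script this instead of using an IDE, a one-liner along the following lines does the same job. This is only a sketch: `10.200.0.0/16` is an example value, and the same approach works for the `SERVICE_CIDR` replacement described below.

```bash
# Run from the root of your clone. GNU sed shown; on macOS (BSD sed) use: sed -i '' ...
# Substitute your chosen range for 10.200.0.0/16
grep -rl 'POD_CIDR=10.244.0.0/16' docs/ | xargs sed -i 's|POD_CIDR=10.244.0.0/16|POD_CIDR=10.200.0.0/16|g'
```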
|
||||
|
||||
### Service Network
|
||||
#### Service Network
|
||||
|
||||
The network used to assign IP addresses to Cluster IP services is `10.96.0.0/16`.
|
||||
|
||||
To change this, open all the `.md` files in the [docs](../docs/) directory in your favourite IDE and do a global replace on<br>
|
||||
To change this, open all the `.md` files in the [docs](../../docs/) directory in your favourite IDE and do a global replace on<br>
|
||||
`SERVICE_CIDR=10.96.0.0/16`<br>
|
||||
with the new CIDR range. This should not overlap any of the other network settings.
|
||||
|
||||
Additionally edit line 164 of [coredns.yaml](../deployments/coredns.yaml) to set the new DNS service address (should still end with `.10`)
|
||||
Additionally edit line 164 of [coredns.yaml](../../deployments/coredns.yaml) to set the new DNS service address (should still end with `.10`)
|
||||
|
||||
## Running Commands in Parallel with tmux
|
||||
|
||||
|
@ -75,7 +84,7 @@ Additionally edit line 164 of [coredns.yaml](../deployments/coredns.yaml) to set
|
|||
|
||||
> The use of tmux is optional and not required to complete this tutorial.
|
||||
|
||||

|
||||

|
||||
|
||||
> Enable synchronize-panes by pressing `CTRL+B` followed by `"` to split the window into two panes. In each pane (selectable with mouse), ssh to the host(s) you will be working with.</br>Next type `CTRL+X` at the prompt to begin sync. In sync mode, the dividing line between panes will be red. Everything you type or paste in one pane will be echoed in the other.<br>To disable synchronization type `CTRL+X` again.</br></br>Note that the `CTRL-X` key binding is provided by a `.tmux.conf` loaded onto the VM by the vagrant provisioner.
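For reference, a toggle of this kind can be expressed in `.tmux.conf` roughly as follows. This is a sketch only; the exact file loaded by the vagrant provisioner may differ.

```
# Toggle synchronize-panes with Ctrl-X (no prefix key required)
bind -n C-x setw synchronize-panes
```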
|
||||
|
|
@ -14,7 +14,7 @@ CD into vagrant directory:
|
|||
cd kubernetes-the-hard-way/vagrant
|
||||
```
|
||||
|
||||
The `Vagrantfile` is configured to assume you have at least an 8 core CPU which most modern core i5, i7 and i9 do, and at least 16GB RAM. You can tune these values expecially if you have *less* than this by editing the `Vagrantfile` before the next step below and adjusting the values for `RAM_SIZE` and `CPU_CORES` accordingly.
|
||||
The `Vagrantfile` is configured to assume you have at least an 8 core CPU which most modern core i5, i7 and i9 do, and at least 16GB RAM. You can tune these values especially if you have *less* than this by editing the `Vagrantfile` before the next step below and adjusting the values for `RAM_SIZE` and `CPU_CORES` accordingly. It is not recommended to change these unless you know what you are doing as it may result in crashes and will make the lab harder to support.
|
||||
|
||||
This will not work if you have less than 8GB of RAM.
|
||||
|
||||
|
@ -27,7 +27,7 @@ vagrant up
|
|||
|
||||
This does the below:
|
||||
|
||||
- Deploys 5 VMs - 2 Master, 2 Worker and 1 Loadbalancer with the name 'kubernetes-ha-* '
|
||||
- Deploys 5 VMs - 2 controlplane, 2 worker and 1 loadbalancer with the name 'kubernetes-ha-* '
|
||||
> These are the default settings. These can be changed at the top of the Vagrant file.
|
||||
> If you choose to change these settings, please also update `vagrant/ubuntu/vagrant/setup-hosts.sh`
|
||||
> to add the additional hosts to the `/etc/hosts` default before running `vagrant up`.
|
||||
|
@ -36,10 +36,10 @@ This does the below:
|
|||
|
||||
| VM | VM Name | Purpose | IP | Forwarded Port | RAM |
|
||||
| ------------ | ---------------------- |:-------------:| -------------:| ----------------:|-----:|
|
||||
| master-1 | kubernetes-ha-master-1 | Master | 192.168.56.11 | 2711 | 2048 |
|
||||
| master-2 | kubernetes-ha-master-2 | Master | 192.168.56.12 | 2712 | 1024 |
|
||||
| worker-1 | kubernetes-ha-worker-1 | Worker | 192.168.56.21 | 2721 | 512 |
|
||||
| worker-2 | kubernetes-ha-worker-2 | Worker | 192.168.56.22 | 2722 | 1024 |
|
||||
| controlplane01 | kubernetes-ha-controlplane01 | Master | 192.168.56.11 | 2711 | 2048 |
|
||||
| controlplane02 | kubernetes-ha-controlplane02 | Master | 192.168.56.12 | 2712 | 1024 |
|
||||
| node01 | kubernetes-ha-node01 | Worker | 192.168.56.21 | 2721 | 512 |
|
||||
| node02 | kubernetes-ha-node02 | Worker | 192.168.56.22 | 2722 | 1024 |
|
||||
| loadbalancer | kubernetes-ha-lb | LoadBalancer | 192.168.56.30 | 2730 | 1024 |
|
||||
|
||||
> These are the default settings. These can be changed in the Vagrant file
|
||||
|
@ -49,7 +49,7 @@ This does the below:
|
|||
|
||||
- Sets required kernel settings for kubernetes networking to function correctly.
|
||||
|
||||
See [Vagrant page](../vagrant/README.md) for details.
|
||||
See [Vagrant page](../../vagrant/README.md) for details.
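Once `vagrant up` has completed, an optional quick check that all five VMs are running is to ask Vagrant for their status from the same directory:

```bash
vagrant status
```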
|
||||
|
||||
## SSH to the nodes
|
||||
|
||||
|
@ -57,7 +57,7 @@ There are two ways to SSH into the nodes:
|
|||
|
||||
### 1. SSH using Vagrant
|
||||
|
||||
From the directory you ran the `vagrant up` command, run `vagrant ssh \<vm\>` for example `vagrant ssh master-1`.
|
||||
From the directory you ran the `vagrant up` command, run `vagrant ssh \<vm\>` for example `vagrant ssh controlplane01`. This is the recommended way.
|
||||
> Note: Use VM field from the above table and not the VM name itself.
|
||||
|
||||
### 2. SSH Using SSH Client Tools
|
||||
|
@ -100,15 +100,15 @@ Sometimes the delete does not delete the folder created for the VM and throws an
|
|||
|
||||
VirtualBox error:
|
||||
|
||||
VBoxManage.exe: error: Could not rename the directory 'D:\VirtualBox VMs\ubuntu-bionic-18.04-cloudimg-20190122_1552891552601_76806' to 'D:\VirtualBox VMs\kubernetes-ha-worker-2' to save the settings file (VERR_ALREADY_EXISTS)
|
||||
VBoxManage.exe: error: Could not rename the directory 'D:\VirtualBox VMs\ubuntu-bionic-18.04-cloudimg-20190122_1552891552601_76806' to 'D:\VirtualBox VMs\kubernetes-ha-node02' to save the settings file (VERR_ALREADY_EXISTS)
|
||||
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component SessionMachine, interface IMachine, callee IUnknown
|
||||
VBoxManage.exe: error: Context: "SaveSettings()" at line 3105 of file VBoxManageModifyVM.cpp
|
||||
|
||||
In such cases delete the VM, then delete the VM folder and then re-provision, e.g.
|
||||
|
||||
```bash
|
||||
vagrant destroy worker-2
|
||||
rmdir "\<path-to-vm-folder\>\kubernetes-ha-worker-2
|
||||
vagrant destroy node02
|
||||
rmdir "\<path-to-vm-folder\>\kubernetes-ha-node02"
|
||||
vagrant up
|
||||
```
|
||||
|
||||
|
@ -137,5 +137,5 @@ To power on again:
|
|||
vagrant up
|
||||
```
|
||||
|
||||
Prev: [Prerequisites](01-prerequisites.md)<br>
|
||||
Next: [Client tools](03-client-tools.md)
|
||||
Next: [Client tools](../../docs/03-client-tools.md)<br>
|
||||
Prev: [Prerequisites](01-prerequisites.md)
|
|
@ -0,0 +1,33 @@
|
|||
#!/usr/bin/env bash
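# Stops and deletes all lab VMs from Multipass, then prints the stale
# /var/db/dhcpd_leases entries that should be removed by hand.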
|
||||
|
||||
set -eo pipefail
|
||||
|
||||
specs=/tmp/vm-specs
|
||||
cat <<EOF > $specs
|
||||
controlplane01,2,2048M,10G
|
||||
controlplane02,2,2048M,5G
|
||||
loadbalancer,1,512M,5G
|
||||
node01,2,2048M,5G
|
||||
node02,2,2048M,5G
|
||||
EOF
|
||||
|
||||
for spec in $(cat $specs)
|
||||
do
|
||||
n=$(cut -d ',' -f 1 <<< $spec)
|
||||
multipass stop $n
|
||||
multipass delete $n
|
||||
done
|
||||
|
||||
multipass purge
|
||||
|
||||
echo
|
||||
echo "You should now remove all the following lines from /var/db/dhcpd_leases"
|
||||
echo
|
||||
cat /var/db/dhcpd_leases | egrep -A 5 -B 1 '(controlplane|node|loadbalancer)'
|
||||
echo
|
||||
cat <<EOF
|
||||
Use the following command to do this
|
||||
|
||||
sudo vi /var/db/dhcpd_leases
|
||||
|
||||
EOF
|
|
@ -0,0 +1,104 @@
|
|||
#!/usr/bin/env bash
|
||||
# When VMs are deleted, IPs remain allocated in dhcpdb
|
||||
# IP reclaim: https://discourse.ubuntu.com/t/is-it-possible-to-either-specify-an-ip-address-on-launch-or-reset-the-next-ip-address-to-be-used/30316
|
||||
|
||||
ARG=$1
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
RED="\033[1;31m"
|
||||
YELLOW="\033[1;33m"
|
||||
GREEN="\033[1;32m"
|
||||
BLUE="\033[1;34m"
|
||||
NC="\033[0m"
|
||||
|
||||
echo -e "${BLUE}Checking system compatibility${NC}"
|
||||
|
||||
MEM_GB=$(( $(sysctl hw.memsize | cut -d ' ' -f 2) / 1073741824 ))
|
||||
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )/scripts
|
||||
|
||||
if [ $MEM_GB -lt 12 ]
|
||||
then
|
||||
echo -e "${RED}System RAM is ${MEM_GB}GB. This is insufficient to deploy a working cluster.${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! command -v multipass > /dev/null
|
||||
then
|
||||
echo -e "${RED}Cannot find multipass. Did you install it as per the instructions?${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if ! command -v jq > /dev/null
|
||||
then
|
||||
echo -e "${RED}Cannot find jq. Did you install it as per the instructions?${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
specs=/tmp/vm-specs
|
||||
cat <<EOF > $specs
|
||||
controlplane01,2,2048M,10G
|
||||
controlplane02,2,2048M,5G
|
||||
loadbalancer,1,512M,5G
|
||||
node01,2,2048M,5G
|
||||
node02,2,2048M,5G
|
||||
EOF
|
||||
|
||||
echo -e "${GREEN}System OK!${NC}"
|
||||
|
||||
# If the nodes are running, reset them
|
||||
for spec in $(cat $specs)
|
||||
do
|
||||
node=$(cut -d ',' -f 1 <<< $spec)
|
||||
if multipass list --format json | jq -r '.list[].name' | grep $node > /dev/null
|
||||
then
|
||||
echo -n -e $RED
|
||||
read -p "VMs are running. Delete and rebuild them (y/n)? " ans
|
||||
echo -n -e $NC
|
||||
[ "$ans" != 'y' ] && exit 1
|
||||
break
|
||||
fi
|
||||
done
|
||||
|
||||
# Boot the nodes
|
||||
for spec in $(cat $specs)
|
||||
do
|
||||
node=$(cut -d ',' -f 1 <<< $spec)
|
||||
cpus=$(cut -d ',' -f 2 <<< $spec)
|
||||
ram=$(cut -d ',' -f 3 <<< $spec)
|
||||
disk=$(cut -d ',' -f 4 <<< $spec)
|
||||
if multipass list --format json | jq -r '.list[].name' | grep $node > /dev/null
|
||||
then
|
||||
echo -e "${YELLOW}Deleting $node${NC}"
|
||||
multipass delete $node
|
||||
multipass purge
|
||||
fi
|
||||
|
||||
echo -e "${BLUE}Launching ${node}${NC}"
|
||||
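# 'jammy' is the Multipass image alias for Ubuntu 22.04 LTS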
multipass launch --disk $disk --memory $ram --cpus $cpus --name $node jammy
|
||||
echo -e "${GREEN}$node booted!${NC}"
|
||||
done
|
||||
|
||||
# Create hostfile entries
|
||||
echo -e "${BLUE}Provisioning...${NC}"
|
||||
hostentries=/tmp/hostentries
|
||||
|
||||
[ -f $hostentries ] && rm -f $hostentries
|
||||
|
||||
for spec in $(cat $specs)
|
||||
do
|
||||
node=$(cut -d ',' -f 1 <<< $spec)
|
||||
ip=$(multipass info $node --format json | jq -r 'first( .info[] | .ipv4[0] )')
|
||||
echo "$ip $node" >> $hostentries
|
||||
done
|
||||
|
||||
for spec in $(cat $specs)
|
||||
do
|
||||
node=$(cut -d ',' -f 1 <<< $spec)
|
||||
multipass transfer $hostentries $node:/tmp/
|
||||
multipass transfer $SCRIPT_DIR/01-setup-hosts.sh $node:/tmp/
|
||||
multipass transfer $SCRIPT_DIR/cert_verify.sh $node:/home/ubuntu/
|
||||
multipass exec $node -- /tmp/01-setup-hosts.sh
|
||||
done
|
||||
|
||||
echo -e "${GREEN}Done!${NC}"
|
|
@ -0,0 +1,64 @@
|
|||
# Prerequisites
|
||||
|
||||
## Hardware Requirements
|
||||
|
||||
This lab provisions 5 VMs on your workstation. That's a lot of compute resource!
|
||||
|
||||
* Apple Silicon System (M1/M2/M3 etc)
|
||||
* Minimum 16GB RAM.<br/>Bear in mind that the unified memory architecture of Apple Silicon Macs means that the whole of the quoted memory is not available for software - some of it is used for the display, more if you have external displays.<br/>This rules out 8GB models - sorry.
|
||||
* Pro or Max CPU recommended for running the e2e-tests at the end of this lab.
|
||||
|
||||
## Required Software
|
||||
|
||||
You'll need to install the following first.
|
||||
|
||||
* Multipass - https://multipass.run/install. Follow the instructions to install it and check it is working properly. You should be able to successfully create a test Ubuntu VM following their instructions. Delete the test VM when you're done.
|
||||
* JQ - https://github.com/stedolan/jq/wiki/Installation#macos
|
||||
|
||||
Additionally
|
||||
|
||||
* Your account on your Mac must have admin privilege and be able to use `sudo`
|
||||
|
||||
Clone this repo down to your Mac. Open your Mac's terminal application. All commands in this guide are executed from the terminal.
|
||||
|
||||
```bash
|
||||
mkdir ~/kodekloud
|
||||
cd ~/kodekloud
|
||||
git clone https://github.com/mmumshad/kubernetes-the-hard-way.git
|
||||
cd kubernetes-the-hard-way/apple-silicon
|
||||
```
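As an optional sanity check before proceeding, confirm that the required tools respond:

```bash
multipass version
jq --version
```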
|
||||
|
||||
## Virtual Machine Network
|
||||
|
||||
Due to how the virtualization works, the networking for each VM requires two network adapters; one used by Multipass and one used by everything else. Kubernetes components may by default bind to the Multipass adapter, which is *not* what we want, so we have pre-set an environment variable `PRIMARY_IP` on all VMs which is the IP address that Kubernetes components should be using. In the coming labs you will see this environment variable being used to ensure Kubernetes components bind to the correct network interface.
|
||||
|
||||
`PRIMARY_IP` is defined as the IP address of the network interface on the node that is connected to the network having the default gateway, and is the interface that a node will use to talk to the other nodes.
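Once the VMs are up, you can see what `PRIMARY_IP` resolved to on any node. This is an optional check; the second command simply repeats the derivation used by the provisioning script.

```bash
grep PRIMARY_IP /etc/environment
# or derive it the same way the provisioning script does
ip route | grep default | awk '{ print $9 }'
```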
|
||||
|
||||
### NAT Networking
|
||||
|
||||
In NAT configuration, the network on which the VMs run is isolated from your broadband router's network by a NAT gateway managed by the hypervisor. This means that VMs can see out (and connect to Internet), but you can't see in (i.e. use browser to connect to NodePorts). It is currently not possible to set up port forwarding rules in Multipass to facilitate this.
|
||||
|
||||
The network used by the VMs is chosen by Multipass.
|
||||
|
||||
It is *recommended* that you leave the pod and service networks as the defaults. If you change them then you will also need to edit the Weave networking manifests to accommodate your change.
|
||||
|
||||
If you do decide to change any of these, please treat as personal preference and do not raise a pull request.
|
||||
|
||||
|
||||
## Running Commands in Parallel with iterm2
|
||||
|
||||
[iterm2](https://iterm2.com/), a popular replacement for the standard Mac terminal application, can be used to run the same commands on multiple compute instances at the same time. Some labs in this tutorial require running the same commands on multiple compute instances, such as when installing the Kubernetes software. In those cases you may consider using iterm2 and splitting a window into multiple panes with *Broadcast input to all panes* enabled to speed up the provisioning process.
|
||||
|
||||
*The use of iterm2 is optional and not required to complete this tutorial*.
|
||||
|
||||

|
||||
|
||||
To set up as per the image above, do the following in iterm2
|
||||
1. Right click and select split pane horizontally
|
||||
1. In each pane, connect to a different node with `Multipass shell`
|
||||
1. From the `Session` menu at the top, toggle `Broadcast` -> `Broadcast input to all panes` (or press `ALT`-`CMD`-`I`). The small icon at the top right of each pane indicates broadcast mode is enabled.
|
||||
|
||||
Input typed or pasted at one command prompt will be echoed to the others. Remember to turn off broadcast when you have finished a section that applies to multiple nodes.
|
||||
|
||||
Next: [Compute Resources](02-compute-resources.md)
|
||||
|
|
@ -0,0 +1,60 @@
|
|||
# Compute Resources
|
||||
|
||||
Because we cannot use VirtualBox and are instead using Multipass, [a script is provided](./deploy-virtual-machines.sh) to create the five VMs.
|
||||
|
||||
1. Run the VM deploy script from your Mac terminal application
|
||||
|
||||
```bash
|
||||
./deploy-virtual-machines.sh
|
||||
```
|
||||
|
||||
2. Verify you can connect to all VMs:
|
||||
|
||||
```bash
|
||||
multipass shell controlplane01
|
||||
```
|
||||
|
||||
You should see a command prompt like `ubuntu@controlplane01:~$`
|
||||
|
||||
Type the following to return to the Mac terminal
|
||||
|
||||
```bash
|
||||
exit
|
||||
```
|
||||
|
||||
Do this for the other controlplane, both nodes and loadbalancer.
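You can also list all five VMs, their state and their IP addresses in one go:

```bash
multipass list
```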
|
||||
|
||||
# Deleting the Virtual Machines
|
||||
|
||||
When you have finished with your cluster and want to reclaim the resources, perform the following steps
|
||||
|
||||
1. Exit from all your VM sessions
|
||||
1. Run the [delete script](../delete-virtual-machines.sh) from your Mac terminal application
|
||||
|
||||
```bash
|
||||
./delete-virtual-machines.sh
|
||||
```
|
||||
|
||||
1. Clean stale DHCP leases. Multipass does not do this automatically and if you do not do it yourself you will eventually run out of IP addresses on the multipass VM network.
|
||||
|
||||
1. Edit the following
|
||||
|
||||
```bash
|
||||
sudo vi /var/db/dhcpd_leases
|
||||
```
|
||||
|
||||
1. Remove all blocks that look like this, specifically those with `name` like `controlplane`, `node` or `loadbalancer`
|
||||
```text
|
||||
{
|
||||
name=controlplane01
|
||||
ip_address=192.168.64.4
|
||||
hw_address=1,52:54:0:78:4d:ff
|
||||
identifier=1,52:54:0:78:4d:ff
|
||||
lease=0x65dc3134
|
||||
}
|
||||
```
|
||||
|
||||
1. Save the file and exit
|
||||
|
||||
Next: [Client tools](../../docs/03-client-tools.md)<br>
|
||||
Prev: [Prerequisites](./01-prerequisites.md)
|
|
@ -0,0 +1,27 @@
|
|||
#!/usr/bin/env bash
|
||||
|
||||
# Set hostfile entries
|
||||
sudo sed -i "/$(hostname)/d" /etc/hosts
|
||||
cat /tmp/hostentries | sudo tee -a /etc/hosts &> /dev/null
|
||||
|
||||
# Export internal IP of primary NIC as an environment variable
|
||||
echo "PRIMARY_IP=$(ip route | grep default | awk '{ print $9 }')" | sudo tee -a /etc/environment > /dev/null
|
||||
|
||||
# Export architecture as environment variable to download correct versions of software
|
||||
echo "ARCH=arm64" | sudo tee -a /etc/environment > /dev/null
|
||||
|
||||
# Enable password auth in sshd so we can use ssh-copy-id
|
||||
sudo sed -i --regexp-extended 's/#?PasswordAuthentication (yes|no)/PasswordAuthentication yes/' /etc/ssh/sshd_config
|
||||
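# The stock Ubuntu cloud image ships a drop-in under /etc/ssh/sshd_config.d/ that can
# override PasswordAuthentication, so stop including that directory (assumption about
# the default cloud-init configuration)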
sudo sed -i --regexp-extended 's/#?Include \/etc\/ssh\/sshd_config.d\/\*.conf/#Include \/etc\/ssh\/sshd_config.d\/\*.conf/' /etc/ssh/sshd_config
|
||||
sudo sed -i 's/KbdInteractiveAuthentication no/KbdInteractiveAuthentication yes/' /etc/ssh/sshd_config
|
||||
sudo systemctl restart sshd
|
||||
|
||||
if [ "$(hostname)" = "controlplane01" ]
|
||||
then
|
||||
sh -c 'sudo apt update' &> /dev/null
|
||||
sh -c 'sudo apt-get install -y sshpass' &> /dev/null
|
||||
fi
|
||||
|
||||
# Set password for ubuntu user (it's something random by default)
|
||||
echo 'ubuntu:ubuntu' | sudo chpasswd
|
|
@ -0,0 +1,23 @@
|
|||
#!/usr/bin/env bash
|
||||
|
||||
# Step 2 - Set up Operating System Prerequisites
|
||||
|
||||
# Load required kernel modules
|
||||
sudo modprobe overlay
|
||||
sudo modprobe br_netfilter
|
||||
|
||||
# Persist modules between restarts
|
||||
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
|
||||
overlay
|
||||
br_netfilter
|
||||
EOF
|
||||
|
||||
# Set required networking parameters
|
||||
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
net.ipv4.ip_forward = 1
|
||||
EOF
|
||||
|
||||
# Apply sysctl params without reboot
|
||||
sudo sysctl --system
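# Optionally verify: 'sysctl net.ipv4.ip_forward' should now report 1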
|
|
@ -0,0 +1,578 @@
|
|||
#!/bin/bash
|
||||
set -e
|
||||
#set -x
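# Usage: ./cert_verify.sh [1-5]
#   With no argument, an interactive menu of the available checks is shown.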
|
||||
|
||||
# Green & Red marking for Success and Failed messages
|
||||
SUCCESS='\033[0;32m'
|
||||
FAILED='\033[0;31;1m'
|
||||
NC='\033[0m'
|
||||
|
||||
# IP addresses
|
||||
PRIMARY_IP=$(ip route | grep default | awk '{ print $9 }')
|
||||
CONTROL01=$(dig +short controlplane01)
|
||||
CONTROL02=$(dig +short controlplane02)
|
||||
NODE01=$(dig +short node01)
|
||||
NODE02=$(dig +short node02)
|
||||
LOADBALANCER=$(dig +short loadbalancer)
|
||||
LOCALHOST="127.0.0.1"
|
||||
|
||||
# All Cert Location
|
||||
# ca certificate location
|
||||
CACERT=ca.crt
|
||||
CAKEY=ca.key
|
||||
|
||||
# Kube controller manager certificate location
|
||||
KCMCERT=kube-controller-manager.crt
|
||||
KCMKEY=kube-controller-manager.key
|
||||
|
||||
# Kube proxy certificate location
|
||||
KPCERT=kube-proxy.crt
|
||||
KPKEY=kube-proxy.key
|
||||
|
||||
# Kube scheduler certificate location
|
||||
KSCERT=kube-scheduler.crt
|
||||
KSKEY=kube-scheduler.key
|
||||
|
||||
# Kube api certificate location
|
||||
APICERT=kube-apiserver.crt
|
||||
APIKEY=kube-apiserver.key
|
||||
|
||||
# ETCD certificate location
|
||||
ETCDCERT=etcd-server.crt
|
||||
ETCDKEY=etcd-server.key
|
||||
|
||||
# Service account certificate location
|
||||
SACERT=service-account.crt
|
||||
SAKEY=service-account.key
|
||||
|
||||
# All kubeconfig locations
|
||||
|
||||
# kubeproxy.kubeconfig location
|
||||
KPKUBECONFIG=kube-proxy.kubeconfig
|
||||
|
||||
# kube-controller-manager.kubeconfig location
|
||||
KCMKUBECONFIG=kube-controller-manager.kubeconfig
|
||||
|
||||
# kube-scheduler.kubeconfig location
|
||||
KSKUBECONFIG=kube-scheduler.kubeconfig
|
||||
|
||||
# admin.kubeconfig location
|
||||
ADMINKUBECONFIG=admin.kubeconfig
|
||||
|
||||
# All systemd service locations
|
||||
|
||||
# etcd systemd service
|
||||
SYSTEMD_ETCD_FILE=/etc/systemd/system/etcd.service
|
||||
|
||||
# kub-api systemd service
|
||||
SYSTEMD_API_FILE=/etc/systemd/system/kube-apiserver.service
|
||||
|
||||
# kube-controller-manager systemd service
|
||||
SYSTEMD_KCM_FILE=/etc/systemd/system/kube-controller-manager.service
|
||||
|
||||
# kube-scheduler systemd service
|
||||
SYSTEMD_KS_FILE=/etc/systemd/system/kube-scheduler.service
|
||||
|
||||
### WORKER NODES ###
|
||||
|
||||
# node01 cert details
|
||||
NODE01_CERT=/var/lib/kubelet/node01.crt
|
||||
NODE01_KEY=/var/lib/kubelet/node01.key
|
||||
|
||||
# node01 kubeconfig location
|
||||
NODE01_KUBECONFIG=/var/lib/kubelet/kubeconfig
|
||||
|
||||
# node01 kubelet config location
|
||||
NODE01_KUBELET=/var/lib/kubelet/kubelet-config.yaml
|
||||
|
||||
# Systemd node01 kubelet location
|
||||
SYSTEMD_NODE01_KUBELET=/etc/systemd/system/kubelet.service
|
||||
|
||||
# kube-proxy node01 location
|
||||
NODE01_KP_KUBECONFIG=/var/lib/kube-proxy/kubeconfig
|
||||
SYSTEMD_NODE01_KP=/etc/systemd/system/kube-proxy.service
|
||||
|
||||
|
||||
# Function - Master node #
|
||||
|
||||
check_cert_and_key()
|
||||
{
|
||||
local name=$1
|
||||
local subject=$2
|
||||
local issuer=$3
|
||||
local nokey=
|
||||
local cert="${CERT_LOCATION}/$1.crt"
|
||||
local key="${CERT_LOCATION}/$1.key"
|
||||
|
||||
if [ -z $cert -o -z $key ]
|
||||
then
|
||||
printf "${FAILED}cert and/or key not present in ${CERT_LOCATION}. Perhaps you missed a copy step\n${NC}"
|
||||
exit 1
|
||||
elif [ -f $cert -a -f $key ]
|
||||
then
|
||||
printf "${NC}${name} cert and key found, verifying the authenticity\n"
|
||||
CERT_SUBJECT=$(sudo openssl x509 -in $cert -text | grep "Subject: CN"| tr -d " ")
|
||||
CERT_ISSUER=$(sudo openssl x509 -in $cert -text | grep "Issuer: CN"| tr -d " ")
|
||||
CERT_MD5=$(sudo openssl x509 -noout -modulus -in $cert | openssl md5| awk '{print $2}')
|
||||
KEY_MD5=$(sudo openssl rsa -noout -modulus -in $key | openssl md5| awk '{print $2}')
|
||||
if [ $CERT_SUBJECT == "${subject}" ] && [ $CERT_ISSUER == "${issuer}" ] && [ $CERT_MD5 == $KEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}${name} cert and key are correct\n${NC}"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the ${name} certificate and keys, More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#certificate-authority\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}${cert} / ${key} is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#certificate-authority\n"
|
||||
echo "These should be in /var/lib/kubernetes/pki (most certs), /etc/etcd (etcd server certs) or /var/lib/kubelet (kubelet certs)${NC}"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_only()
|
||||
{
|
||||
local name=$1
|
||||
local subject=$2
|
||||
local issuer=$3
|
||||
local cert="${CERT_LOCATION}/$1.crt"
|
||||
|
||||
# node02 auto cert is a .pem
|
||||
[ -f "${CERT_LOCATION}/$1.pem" ] && cert="${CERT_LOCATION}/$1.pem"
|
||||
|
||||
if [ -z $cert ]
|
||||
then
|
||||
printf "${FAILED}cert not present in ${CERT_LOCATION}. Perhaps you missed a copy step\n${NC}"
|
||||
exit 1
|
||||
elif [ -f $cert ]
|
||||
then
|
||||
printf "${NC}${name} cert found, verifying the authenticity\n"
|
||||
CERT_SUBJECT=$(sudo openssl x509 -in $cert -text | grep "Subject: "| tr -d " ")
|
||||
CERT_ISSUER=$(sudo openssl x509 -in $cert -text | grep "Issuer: CN"| tr -d " ")
|
||||
CERT_MD5=$(sudo openssl x509 -noout -modulus -in $cert | openssl md5| awk '{print $2}')
|
||||
if [ $CERT_SUBJECT == "${subject}" ] && [ $CERT_ISSUER == "${issuer}" ]
|
||||
then
|
||||
printf "${SUCCESS}${name} cert is correct\n${NC}"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the ${name} certificate, More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#certificate-authority\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
if [[ $cert == *kubelet-client-current* ]]
|
||||
then
|
||||
printf "${FAILED}${cert} missing. This probably means that kubelet failed to start.${NC}\n"
|
||||
echo -e "Check logs with\n\n sudo journalctl -u kubelet\n"
|
||||
else
|
||||
printf "${FAILED}${cert} missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#certificate-authority\n${NC}"
|
||||
echo "These should be in ${CERT_LOCATION}"
|
||||
fi
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_adminkubeconfig()
|
||||
{
|
||||
if [ -z $ADMINKUBECONFIG ]
|
||||
then
|
||||
printf "${FAILED}please specify admin kubeconfig location\n${NC}"
|
||||
exit 1
|
||||
elif [ -f $ADMINKUBECONFIG ]
|
||||
then
|
||||
printf "${NC}admin kubeconfig file found, verifying the authenticity\n"
|
||||
ADMINKUBECONFIG_SUBJECT=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | sudo openssl x509 -text | grep "Subject: CN" | tr -d " ")
|
||||
ADMINKUBECONFIG_ISSUER=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | sudo openssl x509 -text | grep "Issuer: CN" | tr -d " ")
|
||||
ADMINKUBECONFIG_CERT_MD5=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | sudo openssl x509 -noout | openssl md5 | awk '{print $2}')
|
||||
ADMINKUBECONFIG_KEY_MD5=$(cat $ADMINKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
|
||||
ADMINKUBECONFIG_SERVER=$(cat $ADMINKUBECONFIG | grep "server:"| awk '{print $2}')
|
||||
if [ $ADMINKUBECONFIG_SUBJECT == "Subject:CN=admin,O=system:masters" ] && [ $ADMINKUBECONFIG_ISSUER == "Issuer:CN=KUBERNETES-CA,O=Kubernetes" ] && [ $ADMINKUBECONFIG_CERT_MD5 == $ADMINKUBECONFIG_KEY_MD5 ] && [ $ADMINKUBECONFIG_SERVER == "https://127.0.0.1:6443" ]
|
||||
then
|
||||
printf "${SUCCESS}admin kubeconfig cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the admin kubeconfig certificate and keys, More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-admin-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}admin kubeconfig file is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-admin-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
|
||||
get_kubeconfig_cert_path()
|
||||
{
|
||||
local kubeconfig=$1
|
||||
local cert_field=$2
|
||||
|
||||
sudo cat $kubeconfig | grep $cert_field | awk '{print $2}'
|
||||
}
|
||||
|
||||
check_kubeconfig()
|
||||
{
|
||||
local name=$1
|
||||
local location=$2
|
||||
local apiserver=$3
|
||||
local kubeconfig="${location}/${name}.kubeconfig"
|
||||
|
||||
echo "Checking $kubeconfig"
|
||||
check_kubeconfig_exists $name $location
|
||||
ca=$(get_kubeconfig_cert_path $kubeconfig "certificate-authority")
|
||||
cert=$(get_kubeconfig_cert_path $kubeconfig "client-certificate")
|
||||
key=$(get_kubeconfig_cert_path $kubeconfig "client-key")
|
||||
server=$(sudo cat $kubeconfig | grep server | awk '{print $2}')
|
||||
|
||||
if [ -f "$ca" ]
|
||||
then
|
||||
printf "${SUCCESS}Path to CA certificate is correct${NC}\n"
|
||||
else
|
||||
printf "${FAILED}CA certificate not found at ${ca}${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -f "$cert" ]
|
||||
then
|
||||
printf "${SUCCESS}Path to client certificate is correct${NC}\n"
|
||||
else
|
||||
printf "${FAILED}Client certificate not found at ${cert}${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -f "$key" ]
|
||||
then
|
||||
printf "${SUCCESS}Path to client key is correct${NC}\n"
|
||||
else
|
||||
printf "${FAILED}Client key not found at ${key}${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ "$apiserver" = "$server" ]
|
||||
then
|
||||
printf "${SUCCESS}Server URL is correct${NC}\n"
|
||||
else
|
||||
printf "${FAILED}Server URL ${server} is incorrect${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_kubeconfig_exists() {
|
||||
local name=$1
|
||||
local location=$2
|
||||
local kubeconfig="${location}/${name}.kubeconfig"
|
||||
|
||||
if [ -f "${kubeconfig}" ]
|
||||
then
|
||||
printf "${SUCCESS}${kubeconfig} found${NC}\n"
|
||||
else
|
||||
printf "${FAILED}${kubeconfig} not found!${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_systemd_etcd()
|
||||
{
|
||||
if [ -z $ETCDCERT ] && [ -z $ETCDKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify ETCD cert and key location, Exiting....\n${NC}"
|
||||
exit 1
|
||||
elif [ -f $SYSTEMD_ETCD_FILE ]
|
||||
then
|
||||
printf "${NC}Systemd for ETCD service found, verifying the authenticity\n"
|
||||
|
||||
# Systemd cert and key file details
|
||||
ETCD_CA_CERT=ca.crt
|
||||
CERT_FILE=$(systemctl cat etcd.service | grep "\--cert-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
KEY_FILE=$(systemctl cat etcd.service | grep "\--key-file"| awk '{print $1}' | cut -d "=" -f2)
|
||||
PEER_CERT_FILE=$(systemctl cat etcd.service | grep "\--peer-cert-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
PEER_KEY_FILE=$(systemctl cat etcd.service | grep "\--peer-key-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
TRUSTED_CA_FILE=$(systemctl cat etcd.service | grep "\--trusted-ca-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
PEER_TRUSTED_CA_FILE=$(systemctl cat etcd.service | grep "\--peer-trusted-ca-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
|
||||
# Systemd advertise , client and peer url's
|
||||
|
||||
IAP_URL=$(systemctl cat etcd.service | grep "\--initial-advertise-peer-urls"| awk '{print $2}')
|
||||
LP_URL=$(systemctl cat etcd.service | grep "\--listen-peer-urls"| awk '{print $2}')
|
||||
LC_URL=$(systemctl cat etcd.service | grep "\--listen-client-urls"| awk '{print $2}')
|
||||
AC_URL=$(systemctl cat etcd.service | grep "\--advertise-client-urls"| awk '{print $2}')
|
||||
|
||||
|
||||
ETCD_CA_CERT=/etc/etcd/ca.crt
|
||||
ETCDCERT=/etc/etcd/etcd-server.crt
|
||||
ETCDKEY=/etc/etcd/etcd-server.key
|
||||
if [ $CERT_FILE == $ETCDCERT ] && [ $KEY_FILE == $ETCDKEY ] && [ $PEER_CERT_FILE == $ETCDCERT ] && [ $PEER_KEY_FILE == $ETCDKEY ] && \
|
||||
[ $TRUSTED_CA_FILE == $ETCD_CA_CERT ] && [ $PEER_TRUSTED_CA_FILE = $ETCD_CA_CERT ]
|
||||
then
|
||||
printf "${SUCCESS}ETCD certificate, ca and key files are correct under systemd service\n${NC}"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the ETCD certificate, ca and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md#configure-the-etcd-server\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ $IAP_URL == "https://$PRIMARY_IP:2380" ] && [ $LP_URL == "https://$PRIMARY_IP:2380" ] && [ $LC_URL == "https://$PRIMARY_IP:2379,https://127.0.0.1:2379" ] && \
|
||||
[ $AC_URL == "https://$PRIMARY_IP:2379" ]
|
||||
then
|
||||
printf "${SUCCESS}ETCD initial-advertise-peer-urls, listen-peer-urls, listen-client-urls, advertise-client-urls are correct\n${NC}"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the ETCD initial-advertise-peer-urls / listen-peer-urls / listen-client-urls / advertise-client-urls. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md#configure-the-etcd-server\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
else
|
||||
printf "${FAILED}etcd-server.crt / etcd-server.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md#configure-the-etcd-server\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_systemd_api()
|
||||
{
|
||||
if [ -z $APICERT ] && [ -z $APIKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify kube-api cert and key location, Exiting....\n${NC}"
|
||||
exit 1
|
||||
elif [ -f $SYSTEMD_API_FILE ]
|
||||
then
|
||||
printf "Systemd for kube-api service found, verifying the authenticity\n"
|
||||
|
||||
ADVERTISE_ADDRESS=$(systemctl cat kube-apiserver.service | grep "\--advertise-address" | awk '{print $1}' | cut -d "=" -f2)
|
||||
CLIENT_CA_FILE=$(systemctl cat kube-apiserver.service | grep "\--client-ca-file" | awk '{print $1}' | cut -d "=" -f2)
|
||||
ETCD_CA_FILE=$(systemctl cat kube-apiserver.service | grep "\--etcd-cafile" | awk '{print $1}' | cut -d "=" -f2)
|
||||
ETCD_CERT_FILE=$(systemctl cat kube-apiserver.service | grep "\--etcd-certfile" | awk '{print $1}' | cut -d "=" -f2)
|
||||
ETCD_KEY_FILE=$(systemctl cat kube-apiserver.service | grep "\--etcd-keyfile" | awk '{print $1}' | cut -d "=" -f2)
|
||||
KUBELET_CERTIFICATE_AUTHORITY=$(systemctl cat kube-apiserver.service | grep "\--kubelet-certificate-authority" | awk '{print $1}' | cut -d "=" -f2)
|
||||
KUBELET_CLIENT_CERTIFICATE=$(systemctl cat kube-apiserver.service | grep "\--kubelet-client-certificate" | awk '{print $1}' | cut -d "=" -f2)
|
||||
KUBELET_CLIENT_KEY=$(systemctl cat kube-apiserver.service | grep "\--kubelet-client-key" | awk '{print $1}' | cut -d "=" -f2)
|
||||
SERVICE_ACCOUNT_KEY_FILE=$(systemctl cat kube-apiserver.service | grep "\--service-account-key-file" | awk '{print $1}' | cut -d "=" -f2)
|
||||
TLS_CERT_FILE=$(systemctl cat kube-apiserver.service | grep "\--tls-cert-file" | awk '{print $1}' | cut -d "=" -f2)
|
||||
TLS_PRIVATE_KEY_FILE=$(systemctl cat kube-apiserver.service | grep "\--tls-private-key-file" | awk '{print $1}' | cut -d "=" -f2)
|
||||
|
||||
PKI=/var/lib/kubernetes/pki
|
||||
CACERT="${PKI}/ca.crt"
|
||||
APICERT="${PKI}/kube-apiserver.crt"
|
||||
APIKEY="${PKI}/kube-apiserver.key"
|
||||
SACERT="${PKI}/service-account.crt"
|
||||
KCCERT="${PKI}/apiserver-kubelet-client.crt"
|
||||
KCKEY="${PKI}/apiserver-kubelet-client.key"
|
||||
if [ $ADVERTISE_ADDRESS == $PRIMARY_IP ] && [ $CLIENT_CA_FILE == $CACERT ] && [ $ETCD_CA_FILE == $CACERT ] && \
|
||||
[ $ETCD_CERT_FILE == "${PKI}/etcd-server.crt" ] && [ $ETCD_KEY_FILE == "${PKI}/etcd-server.key" ] && \
|
||||
[ $KUBELET_CERTIFICATE_AUTHORITY == $CACERT ] && [ $KUBELET_CLIENT_CERTIFICATE == $KCCERT ] && [ $KUBELET_CLIENT_KEY == $KCKEY ] && \
|
||||
[ $SERVICE_ACCOUNT_KEY_FILE == $SACERT ] && [ $TLS_CERT_FILE == $APICERT ] && [ $TLS_PRIVATE_KEY_FILE == $APIKEY ]
|
||||
then
|
||||
printf "${SUCCESS}kube-apiserver advertise-address/ client-ca-file/ etcd-cafile/ etcd-certfile/ etcd-keyfile/ kubelet-certificate-authority/ kubelet-client-certificate/ kubelet-client-key/ service-account-key-file/ tls-cert-file/ tls-private-key-file are correct\n${NC}"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-apiserver systemd file, check advertise-address/ client-ca-file/ etcd-cafile/ etcd-certfile/ etcd-keyfile/ kubelet-certificate-authority/ kubelet-client-certificate/ kubelet-client-key/ service-account-key-file/ tls-cert-file/ tls-private-key-file. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-apiserver.crt / kube-apiserver.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_systemd_kcm()
|
||||
{
|
||||
KCMCERT=/var/lib/kubernetes/pki/kube-controller-manager.crt
|
||||
KCMKEY=/var/lib/kubernetes/pki/kube-controller-manager.key
|
||||
CACERT=/var/lib/kubernetes/pki/ca.crt
|
||||
CAKEY=/var/lib/kubernetes/pki/ca.key
|
||||
SAKEY=/var/lib/kubernetes/pki/service-account.key
|
||||
KCMKUBECONFIG=/var/lib/kubernetes/kube-controller-manager.kubeconfig
|
||||
if [ -z $KCMCERT ] && [ -z $KCMKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify cert and key location\n${NC}"
|
||||
exit 1
|
||||
elif [ -f $SYSTEMD_KCM_FILE ]
|
||||
then
|
||||
printf "Systemd for kube-controller-manager service found, verifying the authenticity\n"
|
||||
CLUSTER_SIGNING_CERT_FILE=$(systemctl cat kube-controller-manager.service | grep "\--cluster-signing-cert-file" | awk '{print $1}' | cut -d "=" -f2)
|
||||
CLUSTER_SIGNING_KEY_FILE=$(systemctl cat kube-controller-manager.service | grep "\--cluster-signing-key-file" | awk '{print $1}' | cut -d "=" -f2)
|
||||
KUBECONFIG=$(systemctl cat kube-controller-manager.service | grep "\--kubeconfig" | awk '{print $1}' | cut -d "=" -f2)
|
||||
ROOT_CA_FILE=$(systemctl cat kube-controller-manager.service | grep "\--root-ca-file" | awk '{print $1}' | cut -d "=" -f2)
|
||||
SERVICE_ACCOUNT_PRIVATE_KEY_FILE=$(systemctl cat kube-controller-manager.service | grep "\--service-account-private-key-file" | awk '{print $1}' | cut -d "=" -f2)
|
||||
|
||||
if [ $CLUSTER_SIGNING_CERT_FILE == $CACERT ] && [ $CLUSTER_SIGNING_KEY_FILE == $CAKEY ] && [ $KUBECONFIG == $KCMKUBECONFIG ] && \
|
||||
[ $ROOT_CA_FILE == $CACERT ] && [ $SERVICE_ACCOUNT_PRIVATE_KEY_FILE == $SAKEY ]
|
||||
then
|
||||
printf "${SUCCESS}kube-controller-manager cluster-signing-cert-file, cluster-signing-key-file, kubeconfig, root-ca-file, service-account-private-key-file are correct\n${NC}"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-controller-manager cluster-signing-cert-file, cluster-signing-key-file, kubeconfig, root-ca-file, service-account-private-key-file. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-controller-manager\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-controller-manager.crt / kube-controller-manager.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-controller-manager\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_systemd_ks()
|
||||
{
|
||||
KSCERT=/var/lib/kubernetes/pki/kube-scheduler.crt
|
||||
KSKEY=/var/lib/kubernetes/pki/kube-scheduler.key
|
||||
KSKUBECONFIG=/var/lib/kubernetes/kube-scheduler.kubeconfig
|
||||
|
||||
if [ -z $KSCERT ] && [ -z $KSKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify cert and key location\n${NC}"
|
||||
exit 1
|
||||
elif [ -f $SYSTEMD_KS_FILE ]
|
||||
then
|
||||
printf "Systemd for kube-scheduler service found, verifying the authenticity\n"
|
||||
|
||||
KUBECONFIG=$(systemctl cat kube-scheduler.service | grep "\--kubeconfig"| awk '{print $1}'| cut -d "=" -f2)
|
||||
|
||||
if [ $KUBECONFIG == $KSKUBECONFIG ]
|
||||
then
|
||||
printf "${SUCCESS}kube-scheduler --kubeconfig is correct\n${NC}"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-scheduler --kubeconfig. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-scheduler\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-scheduler.crt / kube-scheduler.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-scheduler\n${NC}"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# END OF Function - Master node #
|
||||
|
||||
if [ ! -z "$1" ]
|
||||
then
|
||||
choice=$1
|
||||
else
|
||||
echo "This script will validate the certificates on the controlplane as well as the worker nodes. Before proceeding, make sure you ssh into the respective node [ controlplane or worker ] for certificate validation"
|
||||
while true
|
||||
do
|
||||
echo
|
||||
echo " 1. Verify certificates on Master Nodes after step 4"
|
||||
echo " 2. Verify kubeconfigs on Master Nodes after step 5"
|
||||
echo " 3. Verify kubeconfigs and PKI on Master Nodes after step 8"
|
||||
echo " 4. Verify kubeconfigs and PKI on node01 Node after step 10"
|
||||
echo " 5. Verify kubeconfigs and PKI on node02 Node after step 11"
|
||||
echo
|
||||
echo -n "Please select one of the above options: "
|
||||
read choice
|
||||
|
||||
[ -z "$choice" ] && continue
|
||||
[ $choice -gt 0 -a $choice -lt 6 ] && break
|
||||
done
|
||||
fi
|
||||
|
||||
HOST=$(hostname -s)
|
||||
|
||||
CERT_ISSUER="Issuer:CN=KUBERNETES-CA,O=Kubernetes"
|
||||
SUBJ_CA="Subject:CN=KUBERNETES-CA,O=Kubernetes"
|
||||
SUBJ_ADMIN="Subject:CN=admin,O=system:masters"
|
||||
SUBJ_KCM="Subject:CN=system:kube-controller-manager,O=system:kube-controller-manager"
|
||||
SUBJ_KP="Subject:CN=system:kube-proxy,O=system:node-proxier"
|
||||
SUBJ_KS="Subject:CN=system:kube-scheduler,O=system:kube-scheduler"
|
||||
SUBJ_API="Subject:CN=kube-apiserver,O=Kubernetes"
|
||||
SUBJ_SA="Subject:CN=service-accounts,O=Kubernetes"
|
||||
SUBJ_ETCD="Subject:CN=etcd-server,O=Kubernetes"
|
||||
SUBJ_APIKC="Subject:CN=kube-apiserver-kubelet-client,O=system:masters"
|
||||
|
||||
case $choice in
|
||||
|
||||
1)
|
||||
if ! [ "${HOST}" = "controlplane01" -o "${HOST}" = "controlplane02" ]
|
||||
then
|
||||
printf "${FAILED}Must run on controlplane01 or controlplane02${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo -e "The selected option is $choice, proceeding with the certificate verification of the controlplane node"
|
||||
|
||||
CERT_LOCATION=$HOME
|
||||
check_cert_and_key "ca" $SUBJ_CA $CERT_ISSUER
|
||||
check_cert_and_key "kube-apiserver" $SUBJ_API $CERT_ISSUER
|
||||
check_cert_and_key "kube-controller-manager" $SUBJ_KCM $CERT_ISSUER
|
||||
check_cert_and_key "kube-scheduler" $SUBJ_KS $CERT_ISSUER
|
||||
check_cert_and_key "service-account" $SUBJ_SA $CERT_ISSUER
|
||||
check_cert_and_key "apiserver-kubelet-client" $SUBJ_APIKC $CERT_ISSUER
|
||||
check_cert_and_key "etcd-server" $SUBJ_ETCD $CERT_ISSUER
|
||||
|
||||
if [ "${HOST}" = "controlplane01" ]
|
||||
then
|
||||
check_cert_and_key "admin" $SUBJ_ADMIN $CERT_ISSUER
|
||||
check_cert_and_key "kube-proxy" $SUBJ_KP $CERT_ISSUER
|
||||
fi
|
||||
;;
|
||||
|
||||
2)
|
||||
if ! [ "${HOST}" = "controlplane01" -o "${HOST}" = "controlplane02" ]
|
||||
then
|
||||
printf "${FAILED}Must run on controlplane01 or controlplane02${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
check_cert_adminkubeconfig
|
||||
check_kubeconfig_exists "kube-controller-manager" $HOME
|
||||
check_kubeconfig_exists "kube-scheduler" $HOME
|
||||
|
||||
if [ "${HOST}" = "controlplane01" ]
|
||||
then
|
||||
check_kubeconfig_exists "kube-proxy" $HOME
|
||||
fi
|
||||
;;
|
||||
|
||||
3)
|
||||
if ! [ "${HOST}" = "controlplane01" -o "${HOST}" = "controlplane02" ]
|
||||
then
|
||||
printf "${FAILED}Must run on controlplane01 or controlplane02${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
CERT_LOCATION=/etc/etcd
|
||||
check_cert_only "ca" $SUBJ_CA $CERT_ISSUER
|
||||
check_cert_and_key "etcd-server" $SUBJ_ETCD $CERT_ISSUER
|
||||
|
||||
CERT_LOCATION=/var/lib/kubernetes/pki
|
||||
check_cert_and_key "ca" $SUBJ_CA $CERT_ISSUER
|
||||
check_cert_and_key "kube-apiserver" $SUBJ_API $CERT_ISSUER
|
||||
check_cert_and_key "kube-controller-manager" $SUBJ_KCM $CERT_ISSUER
|
||||
check_cert_and_key "kube-scheduler" $SUBJ_KS $CERT_ISSUER
|
||||
check_cert_and_key "service-account" $SUBJ_SA $CERT_ISSUER
|
||||
check_cert_and_key "apiserver-kubelet-client" $SUBJ_APIKC $CERT_ISSUER
|
||||
check_cert_and_key "etcd-server" $SUBJ_ETCD $CERT_ISSUER
|
||||
|
||||
check_kubeconfig "kube-controller-manager" "/var/lib/kubernetes" "https://127.0.0.1:6443"
|
||||
check_kubeconfig "kube-scheduler" "/var/lib/kubernetes" "https://127.0.0.1:6443"
|
||||
|
||||
check_systemd_api
|
||||
check_systemd_etcd
|
||||
check_systemd_kcm
|
||||
check_systemd_ks
|
||||
;;
|
||||
|
||||
4)
|
||||
if ! [ "${HOST}" = "node01" ]
|
||||
then
|
||||
printf "${FAILED}Must run on node01${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
CERT_LOCATION=/var/lib/kubernetes/pki
|
||||
check_cert_only "ca" $SUBJ_CA $CERT_ISSUER
|
||||
check_cert_and_key "kube-proxy" $SUBJ_KP $CERT_ISSUER
|
||||
check_cert_and_key "node01" "Subject:CN=system:node:node01,O=system:nodes" $CERT_ISSUER
|
||||
check_kubeconfig "kube-proxy" "/var/lib/kube-proxy" "https://${LOADBALANCER}:6443"
|
||||
check_kubeconfig "kubelet" "/var/lib/kubelet" "https://${LOADBALANCER}:6443"
|
||||
;;
|
||||
|
||||
5)
|
||||
if ! [ "${HOST}" = "node02" ]
|
||||
then
|
||||
printf "${FAILED}Must run on node02${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
CERT_LOCATION=/var/lib/kubernetes/pki
|
||||
check_cert_only "ca" $SUBJ_CA $CERT_ISSUER
|
||||
check_cert_and_key "kube-proxy" $SUBJ_KP $CERT_ISSUER
|
||||
|
||||
CERT_LOCATION=/var/lib/kubelet/pki
|
||||
check_cert_only "kubelet-client-current" "Subject:O=system:nodes,CN=system:node:node02" $CERT_ISSUER
|
||||
check_kubeconfig "kube-proxy" "/var/lib/kube-proxy" "https://${LOADBALANCER}:6443"
|
||||
;;
|
||||
|
||||
|
||||
*)
|
||||
printf "${FAILED}Exiting.... Please select the valid option either 1 or 2\n${NC}"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
|
@ -1,16 +1,16 @@
|
|||
# Installing the Client Tools
|
||||
|
||||
First identify a system from where you will perform administrative tasks, such as creating certificates, `kubeconfig` files and distributing them to the different VMs.
|
||||
From this point on, the steps are *exactly* the same for VirtualBox and Apple Silicon as it is now about configuring Kubernetes itself on the Linux hosts which you have now provisioned.
|
||||
|
||||
If you are on a Linux laptop, then your laptop could be this system. In my case I chose the `master-1` node to perform administrative tasks. Whichever system you chose make sure that system is able to access all the provisioned VMs through SSH to copy files over.
|
||||
Begin by logging into `controlplane01` using `vagrant ssh` for VirtualBox, or `multipass shell` for Apple Silicon.
|
||||
|
||||
## Access all VMs
|
||||
|
||||
Here we create an SSH key pair for the `vagrant` user who we are logged in as. We will copy the public key of this pair to the other master and both workers to permit us to use password-less SSH (and SCP) go get from `master-1` to these other nodes in the context of the `vagrant` user which exists on all nodes.
|
||||
Here we create an SSH key pair for the user we are logged in as (this is `vagrant` on VirtualBox, `ubuntu` on Apple Silicon). We will copy the public key of this pair to the other controlplane node and both workers to permit us to use password-less SSH (and SCP) to get from `controlplane01` to these other nodes in the context of that user, which exists on all nodes.
|
||||
|
||||
Generate SSH key pair on `master-1` node:
|
||||
Generate SSH key pair on `controlplane01` node:
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
```bash
|
||||
ssh-keygen
|
||||
|
@ -18,32 +18,52 @@ ssh-keygen
|
|||
|
||||
Leave all settings to default by pressing `ENTER` at any prompt.
|
||||
|
||||
Add this key to the local `authorized_keys` (`master-1`) as in some commands we `scp` to ourself.
|
||||
Add this key to the local `authorized_keys` (`controlplane01`) as in some commands we `scp` to ourself.
|
||||
|
||||
```bash
|
||||
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
|
||||
```
|
||||
|
||||
Copy the key to the other hosts. For this step please enter `vagrant` where a password is requested.
|
||||
Copy the key to the other hosts. You will be asked to enter a password for each of the `ssh-copy-id` commands. The password is:
|
||||
* VirtualBox - `vagrant`
|
||||
* Apple Silicon - `ubuntu`
|
||||
|
||||
The option `-o StrictHostKeyChecking=no` tells it not to ask if you want to connect to a previously unknown host. Not best practice in the real world, but speeds things up here.
|
||||
|
||||
`$(whoami)` selects the appropriate user name to connect to the remote VMs. On VirtualBox this evaluates to `vagrant`; on Apple Silicon it is `ubuntu`.
|
||||
|
||||
```bash
|
||||
ssh-copy-id -o StrictHostKeyChecking=no vagrant@master-2
|
||||
ssh-copy-id -o StrictHostKeyChecking=no vagrant@loadbalancer
|
||||
ssh-copy-id -o StrictHostKeyChecking=no vagrant@worker-1
|
||||
ssh-copy-id -o StrictHostKeyChecking=no vagrant@worker-2
|
||||
ssh-copy-id -o StrictHostKeyChecking=no $(whoami)@controlplane02
|
||||
ssh-copy-id -o StrictHostKeyChecking=no $(whoami)@loadbalancer
|
||||
ssh-copy-id -o StrictHostKeyChecking=no $(whoami)@node01
|
||||
ssh-copy-id -o StrictHostKeyChecking=no $(whoami)@node02
|
||||
```
|
||||
|
||||
|
||||
|
||||
For each host, the output should be similar to this. If it is not, then you may have entered an incorrect password. Retry the step.
|
||||
|
||||
```
|
||||
Number of key(s) added: 1
|
||||
|
||||
Now try logging into the machine, with: "ssh 'vagrant@controlplane02'"
|
||||
and check to make sure that only the key(s) you wanted were added.
|
||||
```
|
||||
|
||||
Verify connection
|
||||
|
||||
```
|
||||
ssh controlplane01
|
||||
exit
|
||||
|
||||
ssh controlplane02
|
||||
exit
|
||||
|
||||
ssh node01
|
||||
exit
|
||||
|
||||
ssh node02
|
||||
exit
|
||||
```
|
||||
|
||||
|
||||
## Install kubectl
|
||||
|
||||
The [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
|
||||
|
@ -52,10 +72,12 @@ Reference: [https://kubernetes.io/docs/tasks/tools/install-kubectl/](https://kub
|
|||
|
||||
We will be using `kubectl` early on to generate `kubeconfig` files for the controlplane components.
|
||||
|
||||
The environment variable `ARCH` is pre-set during VM deployment according to whether you are using VirtualBox (`amd64`) or Apple Silicon (`arm64`), to ensure the correct build of this and later software is downloaded for your machine architecture.
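As a quick optional sanity check (assuming the deployment scripts exported `ARCH` as described), you can confirm the variable matches this VM's architecture:

```bash
# ARCH should be amd64 on VirtualBox or arm64 on Apple Silicon
echo "ARCH=${ARCH}"

# uname reports x86_64 for amd64 builds and aarch64 for arm64 builds
uname -m
```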
|
||||
|
||||
### Linux
|
||||
|
||||
```bash
|
||||
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
|
||||
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/${ARCH}/kubectl"
|
||||
chmod +x kubectl
|
||||
sudo mv kubectl /usr/local/bin/
|
||||
```
|
||||
|
@ -65,29 +87,15 @@ sudo mv kubectl /usr/local/bin/
|
|||
Verify `kubectl` is installed:
|
||||
|
||||
```
|
||||
kubectl version -o yaml
|
||||
kubectl version --client
|
||||
```
|
||||
|
||||
output will be similar to this, although versions may be newer:
|
||||
|
||||
```
|
||||
kubectl version -o yaml
|
||||
clientVersion:
|
||||
buildDate: "2023-11-15T16:58:22Z"
|
||||
compiler: gc
|
||||
gitCommit: bae2c62678db2b5053817bc97181fcc2e8388103
|
||||
gitTreeState: clean
|
||||
gitVersion: v1.28.4
|
||||
goVersion: go1.20.11
|
||||
major: "1"
|
||||
minor: "28"
|
||||
platform: linux/amd64
|
||||
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
|
||||
|
||||
The connection to the server localhost:8080 was refused - did you specify the right host or port?
|
||||
Client Version: v1.29.0
|
||||
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
|
||||
```
|
||||
|
||||
Don't worry about the error at the end as it is expected. We have not set anything up yet!
|
||||
|
||||
Prev: [Compute Resources](02-compute-resources.md)<br>
|
||||
Next: [Certificate Authority](04-certificate-authority.md)
|
||||
Next: [Certificate Authority](04-certificate-authority.md)<br>
|
||||
Prev: Compute Resources ([VirtualBox](../VirtualBox/docs/02-compute-resources.md)), ([Apple Silicon](../apple-silicon/docs/02-compute-resources.md))
|
|
@ -4,23 +4,23 @@ In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/w
|
|||
|
||||
# Where to do these?
|
||||
|
||||
You can do these on any machine with `openssl` on it. But you should be able to copy the generated files to the provisioned VMs. Or just do these from one of the master nodes.
|
||||
You can do these on any machine with `openssl` on it. But you should be able to copy the generated files to the provisioned VMs. Or just do these from one of the controlplane nodes.
|
||||
|
||||
In our case we do the following steps on the `master-1` node, as we have set it up to be the administrative client.
|
||||
In our case we do the following steps on the `controlplane01` node, as we have set it up to be the administrative client.
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
## Certificate Authority
|
||||
|
||||
In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
|
||||
|
||||
Query IPs of hosts we will insert as certificate subject alternative names (SANs), which will be read from `/etc/hosts`. Note that doing this allows us to change the VM network range more easily from the default for these labs which is `192.168.56.0/24`
|
||||
Query IPs of hosts we will insert as certificate subject alternative names (SANs), which will be read from `/etc/hosts`.
|
||||
|
||||
Set up environment variables. Run the following:
|
||||
|
||||
```bash
|
||||
MASTER_1=$(dig +short master-1)
|
||||
MASTER_2=$(dig +short master-2)
|
||||
CONTROL01=$(dig +short controlplane01)
|
||||
CONTROL02=$(dig +short controlplane02)
|
||||
LOADBALANCER=$(dig +short loadbalancer)
|
||||
```
|
||||
|
||||
|
@ -34,14 +34,14 @@ API_SERVICE=$(echo $SERVICE_CIDR | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s.1",
|
|||
Check that the environment variables are set. Run the following:
|
||||
|
||||
```bash
|
||||
echo $MASTER_1
|
||||
echo $MASTER_2
|
||||
echo $CONTROL01
|
||||
echo $CONTROL02
|
||||
echo $LOADBALANCER
|
||||
echo $SERVICE_CIDR
|
||||
echo $API_SERVICE
|
||||
```
|
||||
|
||||
The output should look like this. If you changed any of the defaults mentioned in the [prerequisites](./01-prerequisites.md) page, then addresses may differ.
|
||||
The output should look like this with one IP address per line. If you changed any of the defaults mentioned in the [prerequisites](./01-prerequisites.md) page, then addresses may differ. The first 3 addresses will also be different for Apple Silicon on Multipass (likely 192.168.64.x).
|
||||
|
||||
```
|
||||
192.168.56.11
|
||||
|
@ -51,7 +51,7 @@ The output should look like this. If you changed any of the defaults mentioned i
|
|||
10.96.0.1
|
||||
```
|
||||
|
||||
Create a CA certificate, then generate a Certificate Signing Request and use it to create a private key:
|
||||
Create a CA certificate by first creating a private key, then using it to create a certificate signing request, then self-signing the new certificate with our key.
|
||||
|
||||
```bash
|
||||
{
|
||||
|
@ -78,12 +78,14 @@ Reference : https://kubernetes.io/docs/tasks/administer-cluster/certificates/#op
|
|||
The `ca.crt` is the Kubernetes Certificate Authority certificate and `ca.key` is the Kubernetes Certificate Authority private key.
|
||||
You will use the `ca.crt` file in many places, so it will be copied to many places.
|
||||
|
||||
The `ca.key` is used by the CA for signing certificates. And it should be securely stored. In this case our master node(s) is our CA server as well, so we will store it on master node(s). There is no need to copy this file elsewhere.
|
||||
The `ca.key` is used by the CA for signing certificates, and it should be securely stored. In this case our controlplane node(s) also act as our CA server, so we will store it there. There is no need to copy this file elsewhere.
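If you wish, you can inspect the CA certificate you have just created with `openssl` (an optional check, assuming `ca.crt` is in the current directory as per the commands above):

```bash
# For a self-signed CA, the subject and issuer should both show CN = KUBERNETES-CA
openssl x509 -in ca.crt -noout -subject -issuer -dates
```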
|
||||
|
||||
## Client and Server Certificates
|
||||
|
||||
In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.
|
||||
|
||||
To better understand the role of client certificates with respect to users and groups, see [this informative video](https://youtu.be/I-iVrIWfMl8). Note that all the Kubernetes services below are themselves cluster users.
|
||||
|
||||
### The Admin Client Certificate
|
||||
|
||||
Generate the `admin` client certificate and private key:
|
||||
|
@ -191,7 +193,7 @@ kube-scheduler.crt
|
|||
|
||||
### The Kubernetes API Server Certificate
|
||||
|
||||
The kube-apiserver certificate requires all names that various components may reach it to be part of the alternate names. These include the different DNS names, and IP addresses such as the master servers IP address, the load balancers IP address, the kube-api service IP address etc.
|
||||
The kube-apiserver certificate requires all names by which various components may reach it to be included in its alternate names. These include the different DNS names, and IP addresses such as the controlplane servers' IP addresses, the load balancer's IP address, the kube-api service IP address etc. These provide an *identity* for the certificate, which is key in the SSL process for a server to prove who it is.
|
||||
|
||||
The `openssl` command cannot take alternate names as a command line parameter, so we must create a `conf` file for it:
|
||||
|
||||
|
@ -213,8 +215,8 @@ DNS.3 = kubernetes.default.svc
|
|||
DNS.4 = kubernetes.default.svc.cluster
|
||||
DNS.5 = kubernetes.default.svc.cluster.local
|
||||
IP.1 = ${API_SERVICE}
|
||||
IP.2 = ${MASTER_1}
|
||||
IP.3 = ${MASTER_2}
|
||||
IP.2 = ${CONTROL01}
|
||||
IP.3 = ${CONTROL02}
|
||||
IP.4 = ${LOADBALANCER}
|
||||
IP.5 = 127.0.0.1
|
||||
EOF
|
||||
|
@ -241,7 +243,7 @@ kube-apiserver.crt
|
|||
kube-apiserver.key
|
||||
```
|
||||
|
||||
# The Kubelet Client Certificate
|
||||
### The API Server Kubelet Client Certificate
|
||||
|
||||
This certificate is for the API server to authenticate with the kubelets when it requests information from them
|
||||
|
||||
|
@ -282,7 +284,7 @@ apiserver-kubelet-client.key
|
|||
|
||||
### The ETCD Server Certificate
|
||||
|
||||
Similarly ETCD server certificate must have addresses of all the servers part of the ETCD cluster
|
||||
Similarly, the ETCD server certificate must include the addresses of all the servers that are part of the ETCD cluster. This too is a server certificate, which is again all about proving identity.
|
||||
|
||||
The `openssl` command cannot take alternate names as a command line parameter, so we must create a `conf` file for it:
|
||||
|
||||
|
@ -297,8 +299,8 @@ basicConstraints = CA:FALSE
|
|||
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
|
||||
subjectAltName = @alt_names
|
||||
[alt_names]
|
||||
IP.1 = ${MASTER_1}
|
||||
IP.2 = ${MASTER_2}
|
||||
IP.1 = ${CONTROL01}
|
||||
IP.2 = ${CONTROL02}
|
||||
IP.3 = 127.0.0.1
|
||||
EOF
|
||||
```
|
||||
|
@ -326,7 +328,7 @@ etcd-server.crt
|
|||
|
||||
## The Service Account Key Pair
|
||||
|
||||
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as describe in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
|
||||
The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/) documentation.
|
||||
|
||||
Generate the `service-account` certificate and private key:
|
||||
|
||||
|
@ -355,7 +357,7 @@ Run the following, and select option 1 to check all required certificates were g
|
|||
|
||||
[//]: # (command:./cert_verify.sh 1)
|
||||
|
||||
```bash
|
||||
```
|
||||
./cert_verify.sh
|
||||
```
|
||||
|
||||
|
@ -373,7 +375,7 @@ Copy the appropriate certificates and private keys to each instance:
|
|||
|
||||
```bash
|
||||
{
|
||||
for instance in master-1 master-2; do
|
||||
for instance in controlplane01 controlplane02; do
|
||||
scp -o StrictHostKeyChecking=no ca.crt ca.key kube-apiserver.key kube-apiserver.crt \
|
||||
apiserver-kubelet-client.crt apiserver-kubelet-client.key \
|
||||
service-account.key service-account.crt \
|
||||
|
@ -383,21 +385,21 @@ for instance in master-1 master-2; do
|
|||
${instance}:~/
|
||||
done
|
||||
|
||||
for instance in worker-1 worker-2 ; do
|
||||
for instance in node01 node02 ; do
|
||||
scp ca.crt kube-proxy.crt kube-proxy.key ${instance}:~/
|
||||
done
|
||||
}
|
||||
```
|
||||
|
||||
## Optional - Check Certificates on master-2
|
||||
## Optional - Check Certificates on controlplane02
|
||||
|
||||
At `master-2` node run the following, selecting option 1
|
||||
At `controlplane02` node run the following, selecting option 1
|
||||
|
||||
[//]: # (commandssh master-2 './cert_verify.sh 1')
|
||||
[//]: # (commandssh controlplane02 './cert_verify.sh 1')
|
||||
|
||||
```
|
||||
./cert_verify.sh
|
||||
```
|
||||
|
||||
Prev: [Client tools](03-client-tools.md)<br>
|
||||
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
|
||||
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)<br>
|
||||
Prev: [Client tools](03-client-tools.md)
|
||||
|
|
|
@ -14,7 +14,7 @@ In this section you will generate kubeconfig files for the `controller manager`,
|
|||
|
||||
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the load balancer will be used, so let's first get the address of the loadbalancer into a shell variable such that we can use it in the kubeconfigs for services that run on worker nodes. The controller manager and scheduler need to talk to the local API server, hence they use the localhost address.
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
```bash
|
||||
LOADBALANCER=$(dig +short loadbalancer)
|
||||
|
@ -161,7 +161,7 @@ Reference docs for kubeconfig [here](https://kubernetes.io/docs/tasks/access-app
|
|||
Copy the appropriate `kube-proxy` kubeconfig files to each worker instance:
|
||||
|
||||
```bash
|
||||
for instance in worker-1 worker-2; do
|
||||
for instance in node01 node02; do
|
||||
scp kube-proxy.kubeconfig ${instance}:~/
|
||||
done
|
||||
```
|
||||
|
@ -169,22 +169,22 @@ done
|
|||
Copy the appropriate `admin.kubeconfig`, `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
|
||||
|
||||
```bash
|
||||
for instance in master-1 master-2; do
|
||||
for instance in controlplane01 controlplane02; do
|
||||
scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
|
||||
done
|
||||
```
|
||||
|
||||
## Optional - Check kubeconfigs
|
||||
|
||||
At `master-1` and `master-2` nodes, run the following, selecting option 2
|
||||
At `controlplane01` and `controlplane02` nodes, run the following, selecting option 2
|
||||
|
||||
[//]: # (command./cert_verify.sh 2)
|
||||
[//]: # (command:ssh master-2 './cert_verify.sh 2')
|
||||
[//]: # (command:ssh controlplane02 './cert_verify.sh 2')
|
||||
|
||||
```
|
||||
./cert_verify.sh
|
||||
```
|
||||
|
||||
|
||||
Prev: [Certificate Authority](04-certificate-authority.md)<br>
|
||||
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
|
||||
Next: [Generating the Data Encryption Config and Key](./06-data-encryption-keys.md)<br>
|
||||
Prev: [Certificate Authority](./04-certificate-authority.md)
|
||||
|
|
|
@ -6,9 +6,9 @@ In this lab you will generate an encryption key and an [encryption config](https
|
|||
|
||||
## The Encryption Key
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
Generate an encryption key:
|
||||
Generate an encryption key. This is simply 32 bytes of random data, which we base64 encode:
|
||||
|
||||
```bash
|
||||
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
|
||||
|
@ -37,7 +37,7 @@ EOF
|
|||
Copy the `encryption-config.yaml` encryption config file to each controller instance:
|
||||
|
||||
```bash
|
||||
for instance in master-1 master-2; do
|
||||
for instance in controlplane01 controlplane02; do
|
||||
scp encryption-config.yaml ${instance}:~/
|
||||
done
|
||||
```
|
||||
|
@ -45,7 +45,7 @@ done
|
|||
Move `encryption-config.yaml` encryption config file to appropriate directory.
|
||||
|
||||
```bash
|
||||
for instance in master-1 master-2; do
|
||||
for instance in controlplane01 controlplane02; do
|
||||
ssh ${instance} sudo mkdir -p /var/lib/kubernetes/
|
||||
ssh ${instance} sudo mv encryption-config.yaml /var/lib/kubernetes/
|
||||
done
|
||||
|
@ -53,5 +53,5 @@ done
|
|||
|
||||
Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data
|
||||
|
||||
Prev: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)<br>
|
||||
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
|
||||
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)<br>
|
||||
Prev: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
|
||||
|
|
|
@ -2,9 +2,11 @@
|
|||
|
||||
Kubernetes components are stateless and store cluster state in [etcd](https://etcd.io/). In this lab you will bootstrap a two node etcd cluster and configure it for high availability and secure remote access.
|
||||
|
||||
If you examine the command line arguments passed to etcd in its unit file, you should recognise some of the certificates and keys created in earlier sections of this course.
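Once you have created and started the etcd service later in this lab, one optional way to review those arguments is to ask systemd to show the unit it is running:

```bash
# Show the unit file systemd is using for etcd, including its certificate and key arguments
systemctl cat etcd
```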
|
||||
|
||||
## Prerequisites
|
||||
|
||||
The commands in this lab must be run on each controller instance: `master-1`, and `master-2`. Login to each of these using an SSH terminal.
|
||||
The commands in this lab must be run on each controller instance: `controlplane01`, and `controlplane02`. Login to each of these using an SSH terminal.
|
||||
|
||||
### Running commands in parallel with tmux
|
||||
|
||||
|
@ -16,21 +18,21 @@ The commands in this lab must be run on each controller instance: `master-1`, an
|
|||
|
||||
Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:
|
||||
|
||||
[//]: # (host:master-1-master2)
|
||||
[//]: # (host:controlplane01-controlplane02)
|
||||
|
||||
|
||||
```bash
|
||||
ETCD_VERSION="v3.5.9"
|
||||
wget -q --show-progress --https-only --timestamping \
|
||||
"https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz"
|
||||
"https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-${ARCH}.tar.gz"
|
||||
```
|
||||
|
||||
Extract and install the `etcd` server and the `etcdctl` command line utility:
|
||||
|
||||
```bash
|
||||
{
|
||||
tar -xvf etcd-${ETCD_VERSION}-linux-amd64.tar.gz
|
||||
sudo mv etcd-${ETCD_VERSION}-linux-amd64/etcd* /usr/local/bin/
|
||||
tar -xvf etcd-${ETCD_VERSION}-linux-${ARCH}.tar.gz
|
||||
sudo mv etcd-${ETCD_VERSION}-linux-${ARCH}/etcd* /usr/local/bin/
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -52,12 +54,11 @@ Copy and secure certificates. Note that we place `ca.crt` in our main PKI direct
|
|||
```
|
||||
|
||||
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers.<br>
|
||||
Retrieve the internal IP address of the master(etcd) nodes, and also that of master-1 and master-2 for the etcd cluster member list
|
||||
Retrieve the internal IP address of the controlplane (etcd) nodes, and also those of controlplane01 and controlplane02 for the etcd cluster member list.
|
||||
|
||||
```bash
|
||||
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
|
||||
MASTER_1=$(dig +short master-1)
|
||||
MASTER_2=$(dig +short master-2)
|
||||
CONTROL01=$(dig +short controlplane01)
|
||||
CONTROL02=$(dig +short controlplane02)
|
||||
```
|
||||
|
||||
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
|
||||
|
@ -85,12 +86,12 @@ ExecStart=/usr/local/bin/etcd \\
|
|||
--peer-trusted-ca-file=/etc/etcd/ca.crt \\
|
||||
--peer-client-cert-auth \\
|
||||
--client-cert-auth \\
|
||||
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
|
||||
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
|
||||
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
|
||||
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
|
||||
--initial-advertise-peer-urls https://${PRIMARY_IP}:2380 \\
|
||||
--listen-peer-urls https://${PRIMARY_IP}:2380 \\
|
||||
--listen-client-urls https://${PRIMARY_IP}:2379,https://127.0.0.1:2379 \\
|
||||
--advertise-client-urls https://${PRIMARY_IP}:2379 \\
|
||||
--initial-cluster-token etcd-cluster-0 \\
|
||||
--initial-cluster master-1=https://${MASTER_1}:2380,master-2=https://${MASTER_2}:2380 \\
|
||||
--initial-cluster controlplane01=https://${CONTROL01}:2380,controlplane02=https://${CONTROL02}:2380 \\
|
||||
--initial-cluster-state new \\
|
||||
--data-dir=/var/lib/etcd
|
||||
Restart=on-failure
|
||||
|
@ -111,13 +112,15 @@ EOF
|
|||
}
|
||||
```
|
||||
|
||||
> Remember to run the above commands on each controller node: `master-1`, and `master-2`.
|
||||
> Remember to run the above commands on each controller node: `controlplane01`, and `controlplane02`.
|
||||
|
||||
## Verification
|
||||
|
||||
[//]: # (sleep:5)
|
||||
|
||||
List the etcd cluster members:
|
||||
List the etcd cluster members.
|
||||
|
||||
After running the above commands on both controlplane nodes, run the following on either or both of `controlplane01` and `controlplane02`
|
||||
|
||||
```bash
|
||||
sudo ETCDCTL_API=3 etcdctl member list \
|
||||
|
@ -127,14 +130,14 @@ sudo ETCDCTL_API=3 etcdctl member list \
|
|||
--key=/etc/etcd/etcd-server.key
|
||||
```
|
||||
|
||||
> output
|
||||
Output will be similar to this
|
||||
|
||||
```
|
||||
45bf9ccad8d8900a, started, master-2, https://192.168.56.12:2380, https://192.168.56.12:2379
|
||||
54a5796a6803f252, started, master-1, https://192.168.56.11:2380, https://192.168.56.11:2379
|
||||
45bf9ccad8d8900a, started, controlplane02, https://192.168.56.12:2380, https://192.168.56.12:2379
|
||||
54a5796a6803f252, started, controlplane01, https://192.168.56.11:2380, https://192.168.56.11:2379
|
||||
```
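You can also check cluster health with `etcdctl` (an optional extra check, using the same certificate flags as the member list command above):

```bash
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd-server.crt \
  --key=/etc/etcd/etcd-server.key
```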
|
||||
|
||||
Reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#starting-etcd-clusters
|
||||
|
||||
Prev: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)]<br>
|
||||
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
|
||||
Next: [Bootstrapping the Kubernetes Control Plane](./08-bootstrapping-kubernetes-controllers.md)<br>
|
||||
Prev: [Generating the Data Encryption Config and Key](./06-data-encryption-keys.md)
|
||||
|
|
|
@ -2,17 +2,20 @@
|
|||
|
||||
In this lab you will bootstrap the Kubernetes control plane across 2 compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
|
||||
|
||||
Note that in a production-ready cluster it is recommended to have an odd number of master nodes as for multi-node services like etcd, leader election and quorum work better. See lecture on this ([KodeKloud](https://kodekloud.com/topic/etcd-in-ha/), [Udemy](https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/learn/lecture/14296192#overview)). We're only using two here to save on RAM on your workstation.
|
||||
Note that in a production-ready cluster it is recommended to have an odd number of controlplane nodes as for multi-node services like etcd, leader election and quorum work better. See lecture on this ([KodeKloud](https://kodekloud.com/topic/etcd-in-ha/), [Udemy](https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/learn/lecture/14296192#overview)). We're only using two here to save on RAM on your workstation.
|
||||
|
||||
|
||||
If you examine the command line arguments passed to the various control plane components, you should recognise many of the files that were created in earlier sections of this course, such as certificates, keys, kubeconfigs, the encryption configuration etc.
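Once the control plane services are installed and running at the end of this lab, an optional way to review what each one is passed on its command line is:

```bash
# List the certificate, key and kubeconfig files referenced by each control plane service
systemctl cat kube-apiserver kube-controller-manager kube-scheduler | grep -E '\.crt|\.key|\.kubeconfig'
```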
|
||||
|
||||
## Prerequisites
|
||||
|
||||
The commands in this lab up as far as the load balancer configuration must be run on each controller instance: `master-1`, and `master-2`. Login to each controller instance using SSH Terminal.
|
||||
The commands in this lab up as far as the load balancer configuration must be run on each controller instance: `controlplane01`, and `controlplane02`. Login to each controller instance using SSH Terminal.
|
||||
|
||||
You can perform this step with [tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux).
|
||||
|
||||
## Provision the Kubernetes Control Plane
|
||||
|
||||
[//]: # (host:master-1-master2)
|
||||
[//]: # (host:controlplane01-controlplane02)
|
||||
|
||||
### Download and Install the Kubernetes Controller Binaries
|
||||
|
||||
|
@ -22,10 +25,10 @@ Download the latest official Kubernetes release binaries:
|
|||
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
|
||||
|
||||
wget -q --show-progress --https-only --timestamping \
|
||||
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-apiserver" \
|
||||
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-controller-manager" \
|
||||
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-scheduler" \
|
||||
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubectl"
|
||||
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-apiserver" \
|
||||
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-controller-manager" \
|
||||
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-scheduler" \
|
||||
"https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kubectl"
|
||||
```
|
||||
|
||||
Reference: https://kubernetes.io/releases/download/#binaries
|
||||
|
@ -62,15 +65,14 @@ The instance internal IP address will be used to advertise the API Server to mem
|
|||
Retrieve these internal IP addresses:
|
||||
|
||||
```bash
|
||||
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
|
||||
LOADBALANCER=$(dig +short loadbalancer)
|
||||
```
|
||||
|
||||
IP addresses of the two master nodes, where the etcd servers are.
|
||||
IP addresses of the two controlplane nodes, where the etcd servers are.
|
||||
|
||||
```bash
|
||||
MASTER_1=$(dig +short master-1)
|
||||
MASTER_2=$(dig +short master-2)
|
||||
CONTROL01=$(dig +short controlplane01)
|
||||
CONTROL02=$(dig +short controlplane02)
|
||||
```
|
||||
|
||||
CIDR ranges used *within* the cluster
|
||||
|
@ -90,7 +92,7 @@ Documentation=https://github.com/kubernetes/kubernetes
|
|||
|
||||
[Service]
|
||||
ExecStart=/usr/local/bin/kube-apiserver \\
|
||||
--advertise-address=${INTERNAL_IP} \\
|
||||
--advertise-address=${PRIMARY_IP} \\
|
||||
--allow-privileged=true \\
|
||||
--apiserver-count=2 \\
|
||||
--audit-log-maxage=30 \\
|
||||
|
@ -105,7 +107,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
|
|||
--etcd-cafile=/var/lib/kubernetes/pki/ca.crt \\
|
||||
--etcd-certfile=/var/lib/kubernetes/pki/etcd-server.crt \\
|
||||
--etcd-keyfile=/var/lib/kubernetes/pki/etcd-server.key \\
|
||||
--etcd-servers=https://${MASTER_1}:2379,https://${MASTER_2}:2379 \\
|
||||
--etcd-servers=https://${CONTROL01}:2379,https://${CONTROL02}:2379 \\
|
||||
--event-ttl=1h \\
|
||||
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
|
||||
--kubelet-certificate-authority=/var/lib/kubernetes/pki/ca.crt \\
|
||||
|
@ -210,7 +212,7 @@ sudo chmod 600 /var/lib/kubernetes/*.kubeconfig
|
|||
|
||||
## Optional - Check Certificates and kubeconfigs
|
||||
|
||||
At `master-1` and `master-2` nodes, run the following, selecting option 3
|
||||
At `controlplane01` and `controlplane02` nodes, run the following, selecting option 3
|
||||
|
||||
[//]: # (command:./cert_verify.sh 3)
|
||||
|
||||
|
@ -236,6 +238,8 @@ At `master-1` and `master-2` nodes, run the following, selecting option 3
|
|||
|
||||
[//]: # (sleep:10)
|
||||
|
||||
After running the above commands on both controlplane nodes, run the following on `controlplane01`
|
||||
|
||||
```bash
|
||||
kubectl get componentstatuses --kubeconfig admin.kubeconfig
|
||||
```
|
||||
|
@ -253,7 +257,7 @@ etcd-0 Healthy {"health": "true"}
|
|||
etcd-1 Healthy {"health": "true"}
|
||||
```
|
||||
|
||||
> Remember to run the above commands on each controller node: `master-1`, and `master-2`.
|
||||
> Remember to run the above commands on each controller node: `controlplane01`, and `controlplane02`.
|
||||
|
||||
## The Kubernetes Frontend Load Balancer
|
||||
|
||||
|
@ -264,7 +268,7 @@ In this section you will provision an external load balancer to front the Kubern
|
|||
|
||||
An NLB operates at [layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_layer) (TCP), meaning it passes the traffic straight through to the back end servers unfettered and does not interfere with the TLS process, leaving this to the Kube API servers.
|
||||
|
||||
Login to `loadbalancer` instance using SSH Terminal.
|
||||
Login to `loadbalancer` instance using `vagrant ssh` (or `multipass shell` on Apple Silicon).
|
||||
|
||||
[//]: # (host:loadbalancer)
|
||||
|
||||
|
@ -273,15 +277,17 @@ Login to `loadbalancer` instance using SSH Terminal.
|
|||
sudo apt-get update && sudo apt-get install -y haproxy
|
||||
```
|
||||
|
||||
Read IP addresses of master nodes and this host to shell variables
|
||||
Read the IP addresses of the controlplane nodes and this host into shell variables
|
||||
|
||||
```bash
|
||||
MASTER_1=$(dig +short master-1)
|
||||
MASTER_2=$(dig +short master-2)
|
||||
CONTROL01=$(dig +short controlplane01)
|
||||
CONTROL02=$(dig +short controlplane02)
|
||||
LOADBALANCER=$(dig +short loadbalancer)
|
||||
```
|
||||
|
||||
Create HAProxy configuration to listen on API server port on this host and distribute requests evently to the two master nodes.
|
||||
Create an HAProxy configuration to listen on the API server port on this host and distribute requests evenly to the two controlplane nodes.
|
||||
|
||||
We configure it to operate as a [layer 4](https://en.wikipedia.org/wiki/Transport_layer) loadbalancer (using `mode tcp`), which means it forwards any traffic directly to the backends without doing anything like [SSL offloading](https://ssl2buy.com/wiki/ssl-offloading).
|
||||
|
||||
```bash
|
||||
cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
|
||||
|
@ -289,14 +295,14 @@ frontend kubernetes
|
|||
bind ${LOADBALANCER}:6443
|
||||
option tcplog
|
||||
mode tcp
|
||||
default_backend kubernetes-master-nodes
|
||||
default_backend kubernetes-controlplane-nodes
|
||||
|
||||
backend kubernetes-master-nodes
|
||||
backend kubernetes-controlplane-nodes
|
||||
mode tcp
|
||||
balance roundrobin
|
||||
option tcp-check
|
||||
server master-1 ${MASTER_1}:6443 check fall 3 rise 2
|
||||
server master-2 ${MASTER_2}:6443 check fall 3 rise 2
|
||||
server controlplane01 ${CONTROL01}:6443 check fall 3 rise 2
|
||||
server controlplane02 ${CONTROL02}:6443 check fall 3 rise 2
|
||||
EOF
|
||||
```
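Before restarting the service, you can optionally ask HAProxy to validate the configuration file. The `-c` flag runs in check mode only, so it parses the configuration and reports errors without starting the proxy:

```bash
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
```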
|
||||
|
||||
|
@ -311,24 +317,10 @@ sudo systemctl restart haproxy
|
|||
Make a HTTP request for the Kubernetes version info:
|
||||
|
||||
```bash
|
||||
curl https://${LOADBALANCER}:6443/version -k
|
||||
curl -k https://${LOADBALANCER}:6443/version
|
||||
```
|
||||
|
||||
> output
|
||||
This should output some details about the version and build information of the API server.
|
||||
|
||||
```
|
||||
{
|
||||
"major": "1",
|
||||
"minor": "24",
|
||||
"gitVersion": "${KUBE_VERSION}",
|
||||
"gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
|
||||
"gitTreeState": "clean",
|
||||
"buildDate": "2022-07-13T14:23:26Z",
|
||||
"goVersion": "go1.18.3",
|
||||
"compiler": "gc",
|
||||
"platform": "linux/amd64"
|
||||
}
|
||||
```
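If `jq` is available (it is not installed on these VMs by default - install it with `sudo apt-get install -y jq` if you want to try this), you can extract just the version string from the `/version` endpoint:

```bash
# -s silences progress output, -k skips certificate verification as above
curl -sk https://${LOADBALANCER}:6443/version | jq -r '.gitVersion'
```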
|
||||
|
||||
Prev: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)<br>
|
||||
Next: [Installing CRI on the Kubernetes Worker Nodes](09-install-cri-workers.md)
|
||||
Next: [Installing CRI on the Kubernetes Worker Nodes](./09-install-cri-workers.md)<br>
|
||||
Prev: [Bootstrapping the etcd Cluster](./07-bootstrapping-etcd.md)
|
||||
|
|
|
@ -6,52 +6,90 @@ Reference: https://github.com/containerd/containerd/blob/main/docs/getting-start
|
|||
|
||||
### Download and Install Container Networking
|
||||
|
||||
The commands in this lab must be run on each worker instance: `worker-1`, and `worker-2`. Login to each controller instance using SSH Terminal.
|
||||
The commands in this lab must be run on each worker instance: `node01` and `node02`. Login to each worker instance using an SSH terminal.
|
||||
|
||||
Here we will install the container runtime `containerd` from the Ubuntu distribution, and kubectl plus the CNI tools from the Kubernetes distribution. Kubectl is required on worker-2 to initialize kubeconfig files for the worker-node auto registration.
|
||||
Here we will install the container runtime `containerd` from the Ubuntu distribution, and kubectl plus the CNI tools from the Kubernetes distribution. Kubectl is required on `node02` to initialize kubeconfig files for the worker-node auto registration.
|
||||
|
||||
[//]: # (host:worker-1-worker-2)
|
||||
[//]: # (host:node01-node02)
|
||||
|
||||
You can perform this step with [tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux).
|
||||
|
||||
Set up the Kubernetes `apt` repository
|
||||
1. Update the apt package index and install packages needed to use the Kubernetes apt repository:
|
||||
```bash
|
||||
{
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y apt-transport-https ca-certificates curl
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
{
|
||||
1. Set up the required kernel modules and make them persistent
|
||||
```bash
|
||||
{
|
||||
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
|
||||
overlay
|
||||
br_netfilter
|
||||
EOF
|
||||
|
||||
sudo modprobe overlay
|
||||
sudo modprobe br_netfilter
|
||||
}
|
||||
```
|
||||
|
||||
1. Set the required kernel parameters and make them persistent
|
||||
```bash
|
||||
{
|
||||
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
net.ipv4.ip_forward = 1
|
||||
EOF
|
||||
|
||||
sudo sysctl --system
|
||||
}
|
||||
```
|
||||
|
||||
1. Determine latest version of Kubernetes and store in a shell variable
|
||||
|
||||
```bash
|
||||
KUBE_LATEST=$(curl -L -s https://dl.k8s.io/release/stable.txt | awk 'BEGIN { FS="." } { printf "%s.%s", $1, $2 }')
|
||||
```
|
||||
|
||||
1. Download the Kubernetes public signing key
|
||||
```bash
|
||||
{
|
||||
sudo mkdir -p /etc/apt/keyrings
|
||||
curl -fsSL https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
|
||||
}
|
||||
```
|
||||
|
||||
1. Add the Kubernetes apt repository
|
||||
```bash
|
||||
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/${KUBE_LATEST}/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
|
||||
}
|
||||
```
|
||||
```
|
||||
|
||||
Install `containerd` and CNI tools, first refreshing `apt` repos to get up to date versions.
|
||||
|
||||
```bash
|
||||
{
|
||||
1. Install the container runtime and CNI components
|
||||
```bash
|
||||
sudo apt update
|
||||
sudo apt install -y containerd kubernetes-cni kubectl ipvsadm ipset
|
||||
}
|
||||
```
|
||||
sudo apt-get install -y containerd kubernetes-cni kubectl ipvsadm ipset
|
||||
```
|
||||
|
||||
Set up `containerd` configuration to enable systemd Cgroups
|
||||
1. Configure the container runtime to use systemd Cgroups. This is the part many students miss; if it is not done, it results in a controlplane that comes up, but then all the pods start crashlooping. `kubectl` will also fail with an error like `The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?`
|
||||
|
||||
```bash
|
||||
{
|
||||
1. Create the default configuration and pipe it through `sed` to correctly set the Cgroup parameter.
|
||||
|
||||
```bash
|
||||
{
|
||||
sudo mkdir -p /etc/containerd
|
||||
|
||||
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
|
||||
}
|
||||
```
|
||||
}
|
||||
```
|
||||
|
||||
Now restart `containerd` to read the new configuration
|
||||
1. Restart containerd
|
||||
|
||||
```bash
|
||||
sudo systemctl restart containerd
|
||||
```
|
||||
```bash
|
||||
sudo systemctl restart containerd
|
||||
```
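Optionally verify that the change took effect (a quick check against the configuration file generated above):

```bash
# Expect to see: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml
```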
|
||||
|
||||
|
||||
Prev: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)</br>
|
||||
Next: [Bootstrapping the Kubernetes Worker Nodes](10-bootstrapping-kubernetes-workers.md)
|
||||
Next: [Bootstrapping the Kubernetes Worker Nodes](./10-bootstrapping-kubernetes-workers.md)</br>
|
||||
Prev: [Bootstrapping the Kubernetes Control Plane](./08-bootstrapping-kubernetes-controllers.md)
|
||||
|
|
|
@ -8,8 +8,8 @@ We will now install the kubernetes components
|
|||
|
||||
## Prerequisites
|
||||
|
||||
The Certificates and Configuration are created on `master-1` node and then copied over to workers using `scp`.
|
||||
Once this is done, the commands are to be run on first worker instance: `worker-1`. Login to first worker instance using SSH Terminal.
|
||||
The certificates and configuration are created on the `controlplane01` node and then copied over to the workers using `scp`.
|
||||
Once this is done, the commands are to be run on the first worker instance: `node01`. Login to the first worker instance using an SSH terminal.
|
||||
|
||||
### Provisioning Kubelet Client Certificates
|
||||
|
||||
|
@ -17,16 +17,16 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
|
|||
|
||||
Generate a certificate and private key for one worker node:
|
||||
|
||||
On `master-1`:
|
||||
On `controlplane01`:
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
```bash
|
||||
WORKER_1=$(dig +short worker-1)
|
||||
NODE01=$(dig +short node01)
|
||||
```
|
||||
|
||||
```bash
|
||||
cat > openssl-worker-1.cnf <<EOF
|
||||
cat > openssl-node01.cnf <<EOF
|
||||
[req]
|
||||
req_extensions = v3_req
|
||||
distinguished_name = req_distinguished_name
|
||||
|
@ -36,27 +36,27 @@ basicConstraints = CA:FALSE
|
|||
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
|
||||
subjectAltName = @alt_names
|
||||
[alt_names]
|
||||
DNS.1 = worker-1
|
||||
IP.1 = ${WORKER_1}
|
||||
DNS.1 = node01
|
||||
IP.1 = ${NODE01}
|
||||
EOF
|
||||
|
||||
openssl genrsa -out worker-1.key 2048
|
||||
openssl req -new -key worker-1.key -subj "/CN=system:node:worker-1/O=system:nodes" -out worker-1.csr -config openssl-worker-1.cnf
|
||||
openssl x509 -req -in worker-1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out worker-1.crt -extensions v3_req -extfile openssl-worker-1.cnf -days 1000
|
||||
openssl genrsa -out node01.key 2048
|
||||
openssl req -new -key node01.key -subj "/CN=system:node:node01/O=system:nodes" -out node01.csr -config openssl-node01.cnf
|
||||
openssl x509 -req -in node01.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out node01.crt -extensions v3_req -extfile openssl-node01.cnf -days 1000
|
||||
```
|
||||
|
||||
Results:
|
||||
|
||||
```
|
||||
worker-1.key
|
||||
worker-1.crt
|
||||
node01.key
|
||||
node01.crt
|
||||
```
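Optionally, you can confirm the certificate was issued with the subject the Node Authorizer expects (this assumes you are still on `controlplane01` in the directory where `node01.crt` was created):

```bash
# The subject should show CN = system:node:node01 and O = system:nodes
openssl x509 -in node01.crt -noout -subject
```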
|
||||
|
||||
### The kubelet Kubernetes Configuration File
|
||||
|
||||
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
|
||||
|
||||
Get the kub-api server load-balancer IP.
|
||||
Get the kube-api server load-balancer IP.
|
||||
|
||||
```bash
|
||||
LOADBALANCER=$(dig +short loadbalancer)
|
||||
|
@ -64,55 +64,55 @@ LOADBALANCER=$(dig +short loadbalancer)
|
|||
|
||||
Generate a kubeconfig file for the first worker node.
|
||||
|
||||
On `master-1`:
|
||||
On `controlplane01`:
|
||||
```bash
|
||||
{
|
||||
kubectl config set-cluster kubernetes-the-hard-way \
|
||||
--certificate-authority=/var/lib/kubernetes/pki/ca.crt \
|
||||
--server=https://${LOADBALANCER}:6443 \
|
||||
--kubeconfig=worker-1.kubeconfig
|
||||
--kubeconfig=node01.kubeconfig
|
||||
|
||||
kubectl config set-credentials system:node:worker-1 \
|
||||
--client-certificate=/var/lib/kubernetes/pki/worker-1.crt \
|
||||
--client-key=/var/lib/kubernetes/pki/worker-1.key \
|
||||
--kubeconfig=worker-1.kubeconfig
|
||||
kubectl config set-credentials system:node:node01 \
|
||||
--client-certificate=/var/lib/kubernetes/pki/node01.crt \
|
||||
--client-key=/var/lib/kubernetes/pki/node01.key \
|
||||
--kubeconfig=node01.kubeconfig
|
||||
|
||||
kubectl config set-context default \
|
||||
--cluster=kubernetes-the-hard-way \
|
||||
--user=system:node:worker-1 \
|
||||
--kubeconfig=worker-1.kubeconfig
|
||||
--user=system:node:node01 \
|
||||
--kubeconfig=node01.kubeconfig
|
||||
|
||||
kubectl config use-context default --kubeconfig=worker-1.kubeconfig
|
||||
kubectl config use-context default --kubeconfig=node01.kubeconfig
|
||||
}
|
||||
```
|
||||
|
||||
Results:
|
||||
|
||||
```
|
||||
worker-1.kubeconfig
|
||||
node01.kubeconfig
|
||||
```
|
||||
|
||||
### Copy certificates, private keys and kubeconfig files to the worker node:
|
||||
On `master-1`:
|
||||
On `controlplane01`:
|
||||
|
||||
```bash
|
||||
scp ca.crt worker-1.crt worker-1.key worker-1.kubeconfig worker-1:~/
|
||||
scp ca.crt node01.crt node01.key node01.kubeconfig node01:~/
|
||||
```
|
||||
|
||||
|
||||
### Download and Install Worker Binaries
|
||||
|
||||
All the following commands from here until the [verification](#verification) step must be run on `worker-1`
|
||||
All the following commands from here until the [verification](#verification) step must be run on `node01`
|
||||
|
||||
[//]: # (host:worker-1)
|
||||
[//]: # (host:node01)
|
||||
|
||||
|
||||
```bash
|
||||
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
|
||||
|
||||
wget -q --show-progress --https-only --timestamping \
|
||||
https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-proxy \
|
||||
https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubelet
|
||||
https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-proxy \
|
||||
https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kubelet
|
||||
```
|
||||
|
||||
Reference: https://kubernetes.io/releases/download/#binaries
|
||||
|
@ -138,7 +138,7 @@ Install the worker binaries:
|
|||
|
||||
### Configure the Kubelet
|
||||
|
||||
On worker-1:
|
||||
On `node01`:
|
||||
|
||||
Copy keys and config to correct directories and secure
|
||||
|
||||
|
@ -214,6 +214,7 @@ Requires=containerd.service
|
|||
ExecStart=/usr/local/bin/kubelet \\
|
||||
--config=/var/lib/kubelet/kubelet-config.yaml \\
|
||||
--kubeconfig=/var/lib/kubelet/kubelet.kubeconfig \\
|
||||
--node-ip=${PRIMARY_IP} \\
|
||||
--v=2
|
||||
Restart=on-failure
|
||||
RestartSec=5
|
||||
|
@ -225,7 +226,7 @@ EOF
|
|||
|
||||
### Configure the Kubernetes Proxy
|
||||
|
||||
On worker-1:
|
||||
On `node01`:
|
||||
|
||||
```bash
|
||||
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/
|
||||
|
@ -241,7 +242,7 @@ kind: KubeProxyConfiguration
|
|||
apiVersion: kubeproxy.config.k8s.io/v1alpha1
|
||||
clientConnection:
|
||||
kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
|
||||
mode: ipvs
|
||||
mode: iptables
|
||||
clusterCIDR: ${POD_CIDR}
|
||||
EOF
|
||||
```
|
||||
|
@ -267,7 +268,7 @@ EOF
|
|||
|
||||
## Optional - Check Certificates and kubeconfigs
|
||||
|
||||
At `worker-1` node, run the following, selecting option 4
|
||||
At `node01` node, run the following, selecting option 4
|
||||
|
||||
[//]: # (command:./cert_verify.sh 4)
|
||||
|
||||
|
@ -278,7 +279,8 @@ At `worker-1` node, run the following, selecting option 4
|
|||
|
||||
### Start the Worker Services
|
||||
|
||||
On worker-1:
|
||||
On `node01`:
|
||||
|
||||
```bash
|
||||
{
|
||||
sudo systemctl daemon-reload
|
||||
|
@ -287,28 +289,28 @@ On worker-1:
|
|||
}
|
||||
```
|
||||
|
||||
> Remember to run the above commands on worker node: `worker-1`
|
||||
> Remember to run the above commands on worker node: `node01`
|
||||
|
||||
## Verification
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
Now return to the `master-1` node.
|
||||
Now return to the `controlplane01` node.
|
||||
|
||||
List the registered Kubernetes nodes from the master node:
|
||||
List the registered Kubernetes nodes from the controlplane node:
|
||||
|
||||
```bash
|
||||
kubectl get nodes --kubeconfig admin.kubeconfig
|
||||
```
|
||||
|
||||
> output
|
||||
Output will be similar to
|
||||
|
||||
```
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
worker-1 NotReady <none> 93s v1.28.4
|
||||
node01 NotReady <none> 93s v1.28.4
|
||||
```
|
||||
|
||||
The node is not ready as we have not yet installed pod networking. This comes later.
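If you are curious, you can see the reason directly in the node's conditions (an optional check; the exact message text varies with runtime and version, but it should mention the network plugin or CNI not being ready):

```bash
# Print the message of the node's Ready condition
kubectl get node node01 --kubeconfig admin.kubeconfig \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'
```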
|
||||
|
||||
Prev: [Installing CRI on the Kubernetes Worker Nodes](09-install-cri-workers.md)<br>
|
||||
Next: [TLS Bootstrapping Kubernetes Workers](11-tls-bootstrapping-kubernetes-workers.md)
|
||||
Next: [TLS Bootstrapping Kubernetes Workers](./11-tls-bootstrapping-kubernetes-workers.md)<br>
|
||||
Prev: [Installing CRI on the Kubernetes Worker Nodes](./09-install-cri-workers.md)
|
||||
|
|
|
@ -6,7 +6,7 @@ In the previous step we configured a worker node by
|
|||
- Creating a kube-config file using this certificate ourselves
|
||||
- Every time the certificate expires we must follow the same process of updating the certificate ourselves
|
||||
|
||||
This is not a practical approach when you have 1000s of nodes in the cluster, and nodes dynamically being added and removed from the cluster. With TLS boostrapping:
|
||||
This is not a practical approach when you could have 1000s of nodes in the cluster, and nodes dynamically being added and removed from the cluster. With TLS bootstrapping:
|
||||
|
||||
- The Nodes can generate certificate key pairs by themselves
|
||||
- The Nodes can generate certificate signing request by themselves
|
||||
|
@ -41,11 +41,11 @@ So let's get started!
|
|||
|
||||
> Note: We have already configured these in lab 8 in this course
|
||||
|
||||
# Step 1 Create the Boostrap Token to be used by Nodes(Kubelets) to invoke Certificate API
|
||||
# Step 1 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke the Certificate API
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
Run the following steps on `master-1`
|
||||
Run the following steps on `controlplane01`
|
||||
|
||||
For the workers (kubelets) to access the Certificates API, they need to authenticate to the Kubernetes api-server first. For this we create a [Bootstrap Token](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) to be used by the kubelet.
|
||||
|
||||
|
@ -100,7 +100,7 @@ Once this is created the token to be used for authentication is `07401b.f395accd
|
|||
|
||||
Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/#bootstrap-token-secret-format
|
||||
|
||||
## Step 2 Authorize workers(kubelets) to create CSR
|
||||
## Step 2 Authorize nodes (kubelets) to create CSR
|
||||
|
||||
Next we associate the group we created before to the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet
|
||||
|
||||
|
@ -135,7 +135,7 @@ kubectl create -f csrs-for-bootstrapping.yaml --kubeconfig admin.kubeconfig
|
|||
```
|
||||
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr
|
||||
|
||||
## Step 3 Authorize workers(kubelets) to approve CSRs
|
||||
## Step 3 Authorize nodes (kubelets) to approve CSRs
|
||||
|
||||
```bash
|
||||
kubectl create clusterrolebinding auto-approve-csrs-for-group \
|
||||
|
@ -168,7 +168,7 @@ kubectl create -f auto-approve-csrs-for-group.yaml --kubeconfig admin.kubeconfig
|
|||
|
||||
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval
|
||||
|
||||
## Step 4 Authorize workers(kubelets) to Auto Renew Certificates on expiration
|
||||
## Step 4 Authorize nodes (kubelets) to Auto Renew Certificates on expiration
|
||||
|
||||
We now create the Cluster Role Binding required for the nodes to automatically renew the certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more. Since by the renewal period, we believe the node would be bootstrapped and part of the cluster already. All nodes are part of the **system:nodes** group.
|
||||
|
||||
|
@ -206,9 +206,9 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-b
|
|||
|
||||
## Step 5 Configure the Binaries on the Worker node
|
||||
|
||||
Going forward all activities are to be done on the `worker-2` node until [step 11](#step-11-approve-server-csr).
|
||||
Going forward all activities are to be done on the `node02` node until [step 11](#step-11-approve-server-csr).
|
||||
|
||||
[//]: # (host:worker-2)
|
||||
[//]: # (host:node02)
|
||||
|
||||
### Download and Install Worker Binaries
|
||||
|
||||
|
@ -218,8 +218,8 @@ Note that kubectl is required here to assist with creating the boostrap kubeconf
|
|||
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
|
||||
|
||||
wget -q --show-progress --https-only --timestamping \
|
||||
https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-proxy \
|
||||
https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubelet
|
||||
https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kube-proxy \
|
||||
https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/kubelet
|
||||
```
|
||||
|
||||
Reference: https://kubernetes.io/releases/download/#binaries
|
||||
|
@ -256,10 +256,10 @@ Move the certificates and secure them.
|
|||
|
||||
It is now time to configure the second worker to TLS bootstrap using the token we generated
|
||||
|
||||
For worker-1 we started by creating a kubeconfig file with the TLS certificates that we manually generated.
|
||||
For `node01` we started by creating a kubeconfig file with the TLS certificates that we manually generated.
|
||||
Here, we don't have the certificates yet. So we cannot create a kubeconfig file. Instead we create a bootstrap-kubeconfig file with information about the token we created.
|
||||
|
||||
This is to be done on the `worker-2` node. Note that now we have set up the load balancer to provide high availibilty across the API servers, we point kubelet to the load balancer.
|
||||
This is to be done on the `node02` node. Note that now that we have set up the load balancer to provide high availability across the API servers, we point kubelet to the load balancer.
|
||||
|
||||
Set up some shell variables for nodes and services we will require in the following configurations:
|
||||
|
||||
|
@ -367,6 +367,7 @@ ExecStart=/usr/local/bin/kubelet \\
|
|||
--config=/var/lib/kubelet/kubelet-config.yaml \\
|
||||
--kubeconfig=/var/lib/kubelet/kubeconfig \\
|
||||
--cert-dir=/var/lib/kubelet/pki/ \\
|
||||
--node-ip=${PRIMARY_IP} \\
|
||||
--v=2
|
||||
Restart=on-failure
|
||||
RestartSec=5
|
||||
|
@ -404,7 +405,7 @@ kind: KubeProxyConfiguration
|
|||
apiVersion: kubeproxy.config.k8s.io/v1alpha1
|
||||
clientConnection:
|
||||
kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
|
||||
mode: ipvs
|
||||
mode: iptables
|
||||
clusterCIDR: ${POD_CIDR}
|
||||
EOF
|
||||
```
|
||||
|
@ -431,7 +432,7 @@ EOF
|
|||
|
||||
## Step 10 Start the Worker Services
|
||||
|
||||
On worker-2:
|
||||
On `node02`:
|
||||
|
||||
```bash
|
||||
{
|
||||
|
@ -440,11 +441,11 @@ On worker-2:
|
|||
sudo systemctl start kubelet kube-proxy
|
||||
}
|
||||
```
|
||||
> Remember to run the above commands on worker node: `worker-2`
|
||||
> Remember to run the above commands on worker node: `node02`
|
||||
|
||||
### Optional - Check Certificates and kubeconfigs
|
||||
|
||||
At `worker-2` node, run the following, selecting option 5
|
||||
On the `node02` node, run the following, selecting option 5
|
||||
|
||||
[//]: # (command:sleep 5)
|
||||
[//]: # (command:./cert_verify.sh 5)
|
||||
|
@ -456,11 +457,11 @@ At `worker-2` node, run the following, selecting option 5
|
|||
|
||||
## Step 11 Approve Server CSR
|
||||
|
||||
Now, go back to `master-1` and approve the pending kubelet-serving certificate
|
||||
Now, go back to `controlplane01` and approve the pending kubelet-serving certificate
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
[//]: # (command:sudo apt install -y jq)
|
||||
[//]: # (command:kubectl certificate approve --kubeconfig admin.kubeconfig $(kubectl get csr --kubeconfig admin.kubeconfig -o json | jq -r '.items | .[] | select(.spec.username == "system:node:worker-2") | .metadata.name'))
|
||||
[//]: # (command:kubectl certificate approve --kubeconfig admin.kubeconfig $(kubectl get csr --kubeconfig admin.kubeconfig -o json | jq -r '.items | .[] | select(.spec.username == "system:node:node02") | .metadata.name'))
|
||||
|
||||
```bash
|
||||
kubectl get csr --kubeconfig admin.kubeconfig
|
||||
|
@ -470,7 +471,7 @@ kubectl get csr --kubeconfig admin.kubeconfig
|
|||
|
||||
```
|
||||
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
|
||||
csr-7k8nh 85s kubernetes.io/kubelet-serving system:node:worker-2 <none> Pending
|
||||
csr-7k8nh 85s kubernetes.io/kubelet-serving system:node:node02 <none> Pending
|
||||
csr-n7z8p 98s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:07401b <none> Approved,Issued
|
||||
```
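For reference, the pending serving CSR can also be approved by name; the name `csr-7k8nh` below comes from the sample output above, and yours will differ:

```bash
kubectl certificate approve csr-7k8nh --kubeconfig admin.kubeconfig
```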
|
||||
|
||||
|
@ -487,19 +488,21 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-b
|
|||
|
||||
## Verification
|
||||
|
||||
List the registered Kubernetes nodes from the master node:
|
||||
List the registered Kubernetes nodes from the controlplane node:
|
||||
|
||||
```bash
|
||||
kubectl get nodes --kubeconfig admin.kubeconfig
|
||||
```
|
||||
|
||||
> output
|
||||
Output will be similar to
|
||||
|
||||
```
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
worker-1 NotReady <none> 93s v1.28.4
|
||||
worker-2 NotReady <none> 93s v1.28.4
|
||||
node01 NotReady <none> 93s v1.28.4
|
||||
node02 NotReady <none> 93s v1.28.4
|
||||
```
|
||||
|
||||
Prev: [Bootstrapping the Kubernetes Worker Nodes](10-bootstrapping-kubernetes-workers.md)</br>
|
||||
Next: [Configuring Kubectl](12-configuring-kubectl.md)
|
||||
The nodes are not yet ready. As previously mentioned, this is expected.
|
||||
|
||||
Next: [Configuring Kubectl](./12-configuring-kubectl.md)</br>
|
||||
Prev: [Bootstrapping the Kubernetes Worker Nodes](./10-bootstrapping-kubernetes-workers.md)
|
||||
|
|
|
@ -8,9 +8,9 @@ In this lab you will generate a kubeconfig file for the `kubectl` command line u
|
|||
|
||||
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability, the IP address assigned to the external load balancer fronting the Kubernetes API servers will be used.
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
On `master-1`
|
||||
On `controlplane01`
|
||||
|
||||
Get the kube-api server load-balancer IP.
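A minimal sketch of one way to do this, assuming the `loadbalancer` entry that the provisioning scripts add to `/etc/hosts`:

```bash
LOADBALANCER=$(dig +short loadbalancer)
echo $LOADBALANCER
```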
|
||||
|
||||
|
@ -50,7 +50,7 @@ Check the health of the remote Kubernetes cluster:
|
|||
kubectl get componentstatuses
|
||||
```
|
||||
|
||||
> output
|
||||
Output will be similar to this. It may or may not list both etcd instances; however, this is OK if you verified the correct installation of etcd in lab 7.
|
||||
|
||||
```
|
||||
Warning: v1 ComponentStatus is deprecated in v1.19+
|
||||
|
@ -71,9 +71,9 @@ kubectl get nodes
|
|||
|
||||
```
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
worker-1 NotReady <none> 118s v1.28.4
|
||||
worker-2 NotReady <none> 118s v1.28.4
|
||||
node01 NotReady <none> 118s v1.28.4
|
||||
node02 NotReady <none> 118s v1.28.4
|
||||
```
|
||||
|
||||
Prev: [TLS Bootstrapping Kubernetes Workers](11-tls-bootstrapping-kubernetes-workers.md)</br>
|
||||
Next: [Deploy Pod Networking](13-configure-pod-networking.md)
|
||||
Next: [Deploy Pod Networking](./13-configure-pod-networking.md)</br>
|
||||
Prev: [TLS Bootstrapping Kubernetes Workers](./11-tls-bootstrapping-kubernetes-workers.md)
|
||||
|
|
|
@ -7,30 +7,32 @@ We chose to use CNI - [weave](https://www.weave.works/docs/net/latest/kubernetes
|
|||
|
||||
### Deploy Weave Network
|
||||
|
||||
Deploy weave network. Run only once on the `master-1` node. You will see a warning, but this is OK.
|
||||
Some of you may have noticed the announcement that WeaveWorks is no longer trading. At this time, this does not mean that Weave is not a valid CNI. The WeaveWorks software has always been, and remains, open source, and as such is still usable. It just means that the company is no longer providing updates. While it continues to be compatible with Kubernetes, we will continue to use it, as the other options (e.g. Calico, Cilium) require far more configuration steps.
|
||||
|
||||
[//]: # (host:master-1)
|
||||
Deploy the Weave network. Run this only once, on the `controlplane01` node. You may see a warning, but this is OK.
|
||||
|
||||
On `master-1`
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
On `controlplane01`
|
||||
|
||||
```bash
|
||||
kubectl apply -f "https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s-1.11.yaml"
|
||||
|
||||
```
|
||||
|
||||
Weave uses POD CIDR of `10.244.0.0/16` by default.
|
||||
It may take up to 60 seconds for the Weave pods to be ready.
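If you would rather wait from the command line than keep polling, one option (a sketch) is to watch the DaemonSet rollout:

```bash
kubectl rollout status daemonset weave-net -n kube-system --timeout=90s
```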
|
||||
|
||||
## Verification
|
||||
|
||||
[//]: # (command:kubectl rollout status daemonset weave-net -n kube-system --timeout=90s)
|
||||
|
||||
List the registered Kubernetes nodes from the master node:
|
||||
List the registered Kubernetes nodes from the controlplane node:
|
||||
|
||||
```bash
|
||||
kubectl get pods -n kube-system
|
||||
```
|
||||
|
||||
> output
|
||||
Output will be similar to
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
|
@ -38,21 +40,21 @@ weave-net-58j2j 2/2 Running 0 89s
|
|||
weave-net-rr5dk 2/2 Running 0 89s
|
||||
```
|
||||
|
||||
Once the Weave pods are fully running which might take up to 60 seconds, the nodes should be ready
|
||||
Once the Weave pods are fully running, the nodes should be ready.
|
||||
|
||||
```bash
|
||||
kubectl get nodes
|
||||
```
|
||||
|
||||
> Output
|
||||
Output will be similar to
|
||||
|
||||
```
|
||||
NAME STATUS ROLES AGE VERSION
|
||||
worker-1 Ready <none> 4m11s v1.28.4
|
||||
worker-2 Ready <none> 2m49s v1.28.4
|
||||
node01 Ready <none> 4m11s v1.28.4
|
||||
node02 Ready <none> 2m49s v1.28.4
|
||||
```
|
||||
|
||||
Reference: https://kubernetes.io/docs/tasks/administer-cluster/network-policy-provider/weave-network-policy/#install-the-weave-net-addon
|
||||
|
||||
Prev: [Configuring Kubectl](12-configuring-kubectl.md)</br>
|
||||
Next: [Kube API Server to Kubelet Connectivity](14-kube-apiserver-to-kubelet.md)
|
||||
Next: [Kube API Server to Kubelet Connectivity](./14-kube-apiserver-to-kubelet.md)</br>
|
||||
Prev: [Configuring Kubectl](./12-configuring-kubectl.md)
|
||||
|
|
|
@ -4,9 +4,9 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
|
|||
|
||||
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
Run the below on the `master-1` node.
|
||||
Run the following on the `controlplane01` node.
|
||||
|
||||
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
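As a rough sketch of the sort of role being created here (the rules in the lab's own manifest may differ slightly), applied from `controlplane01` with the admin kubeconfig used in earlier steps:

```bash
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF
```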
|
||||
|
||||
|
@ -58,5 +58,5 @@ EOF
|
|||
```
|
||||
Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
|
||||
|
||||
Prev: [Deploy Pod Networking](13-configure-pod-networking.md)</br>
|
||||
Next: [DNS Addon](15-dns-addon.md)
|
||||
Next: [DNS Addon](./15-dns-addon.md)</br>
|
||||
Prev: [Deploy Pod Networking](./13-configure-pod-networking.md)
|
||||
|
|
|
@ -4,11 +4,11 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts
|
|||
|
||||
## The DNS Cluster Add-on
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
Deploy the `coredns` cluster add-on:
|
||||
|
||||
Note that if you have [changed the service CIDR range](./01-prerequisites.md#service-network) and thus this file, you will need to save your copy onto `master-1` (paste to vi, then save) and apply that.
|
||||
Note that if you have [changed the service CIDR range](./01-prerequisites.md#service-network) and thus this file, you will need to save your copy onto `controlplane01` (paste to vi, then save) and apply that.
|
||||
|
||||
```bash
|
||||
kubectl apply -f https://raw.githubusercontent.com/mmumshad/kubernetes-the-hard-way/master/deployments/coredns.yaml
|
||||
|
@ -83,5 +83,5 @@ Name: kubernetes
|
|||
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
|
||||
```
|
||||
|
||||
Prev: [Kube API Server to Kubelet Connectivity](14-kube-apiserver-to-kubelet.md)</br>
|
||||
Next: [Smoke Test](16-smoke-test.md)
|
||||
Next: [Smoke Test](./16-smoke-test.md)</br>
|
||||
Prev: [Kube API Server to Kubelet Connectivity](./14-kube-apiserver-to-kubelet.md)
|
||||
|
|
|
@ -4,7 +4,7 @@ In this lab you will complete a series of tasks to ensure your Kubernetes cluste
|
|||
|
||||
## Data Encryption
|
||||
|
||||
[//]: # (host:master-1)
|
||||
[//]: # (host:controlplane01)
|
||||
|
||||
In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
|
||||
|
||||
|
@ -61,7 +61,7 @@ In this section you will verify the ability to create and manage [Deployments](h
|
|||
Create a deployment for the [nginx](https://nginx.org/en/) web server:
|
||||
|
||||
```bash
|
||||
kubectl create deployment nginx --image=nginx:1.23.1
|
||||
kubectl create deployment nginx --image=nginx:alpine
|
||||
```
|
||||
|
||||
[//]: # (command:kubectl wait deployment -n default nginx --for condition=Available=True --timeout=90s)
|
||||
|
@ -89,6 +89,7 @@ Create a service to expose deployment nginx on node ports.
|
|||
kubectl expose deploy nginx --type=NodePort --port 80
|
||||
```
|
||||
|
||||
[//]: # (command:sleep 2)
|
||||
|
||||
```bash
|
||||
PORT_NUMBER=$(kubectl get svc -l app=nginx -o jsonpath="{.items[0].spec.ports[0].nodePort}")
|
||||
|
@ -97,8 +98,8 @@ PORT_NUMBER=$(kubectl get svc -l app=nginx -o jsonpath="{.items[0].spec.ports[0]
|
|||
Test that the NGINX page can be viewed:
|
||||
|
||||
```bash
|
||||
curl http://worker-1:$PORT_NUMBER
|
||||
curl http://worker-2:$PORT_NUMBER
|
||||
curl http://node01:$PORT_NUMBER
|
||||
curl http://node02:$PORT_NUMBER
|
||||
```
|
||||
|
||||
> output
|
||||
|
@ -160,5 +161,5 @@ kubectl delete service -n default nginx
|
|||
kubectl delete deployment -n default nginx
|
||||
```
|
||||
|
||||
Prev: [DNS Addon](15-dns-addon.md)</br>
|
||||
Next: [End to End Tests](17-e2e-tests.md)
|
||||
Next: [End to End Tests](./17-e2e-tests.md)</br>
|
||||
Prev: [DNS Addon](./15-dns-addon.md)
|
||||
|
|
|
@ -1,17 +1,21 @@
|
|||
# Run End-to-End Tests
|
||||
|
||||
Optional Lab.
|
||||
|
||||
Observations by Alistair (KodeKloud):
|
||||
|
||||
Depending on your computer, you may have varying success with these. I have found them to run much more smoothly on a 12 core Intel(R) Core(TM) i7-7800X Desktop Processor (circa 2017), than on a 20 core Intel(R) Core(TM) i7-12700H Laptop processor (circa 2022) - both machines having 32GB RAM and both machines running the same version of VirtualBox. On the latter, it tends to destabilize the cluster resulting in timeouts in the tests. This *may* be a processor issue in that laptop processors are not really designed to take the kind of abuse that'll be thrown by the tests at a kube cluster that really should be run on a Server processor. Laptop processors do odd things for power conservation like constantly varying the clock speed and mixing "performance" and "efficiency" cores, even when the laptop is plugged in, and this could be causing synchronization issues with the goroutines running in the kube components. If anyone has a definitive explanation for this, please do post in the kubernetes-the-hard-way Slack channel.
|
||||
Depending on your computer, you may have varying success with these. I have found them to run much more smoothly on a 12 core Intel(R) Core(TM) i7-7800X desktop processor (circa 2017) than on a 20 core Intel(R) Core(TM) i7-12700H laptop processor (circa 2022), both machines having 32GB RAM and running the same version of VirtualBox. On the latter, the tests tend to destabilize the cluster, resulting in timeouts. This *may* be a processor issue, in that laptop processors are not really designed to take the kind of abuse the tests throw at a kube cluster, which really should run on a server processor. Laptop processors do odd things for power conservation, like constantly varying the clock speed and mixing "performance" and "efficiency" cores even when the laptop is plugged in, and this could be causing synchronization issues with the goroutines running in the kube components. If anyone has a definitive explanation for this, please do post in the Kubernetes section of the [Community Forum](https://kodekloud.com/community/c/kubernetes/6).
|
||||
|
||||
|
||||
The test suite should be installed on, and run from, `controlplane01`.
|
||||
|
||||
## Install latest Go
|
||||
|
||||
```bash
|
||||
GO_VERSION=$(curl -s 'https://go.dev/VERSION?m=text' | head -1)
|
||||
wget "https://dl.google.com/go/${GO_VERSION}.linux-amd64.tar.gz"
|
||||
wget "https://dl.google.com/go/${GO_VERSION}.linux-${ARCH}.tar.gz"
|
||||
|
||||
sudo tar -C /usr/local -xzf ${GO_VERSION}.linux-amd64.tar.gz
|
||||
sudo tar -C /usr/local -xzf ${GO_VERSION}.linux-${ARCH}.tar.gz
|
||||
|
||||
sudo ln -s /usr/local/go/bin/go /usr/local/bin/go
|
||||
sudo ln -s /usr/local/go/bin/gofmt /usr/local/bin/gofmt
|
||||
|
@ -32,7 +36,7 @@ sudo snap install google-cloud-cli --classic
|
|||
|
||||
## Run test
|
||||
|
||||
Here we set up a couple of environment variables to supply arguments to the test package - the version of our cluster and the number of CPUs on `master-1` to aid with test parallelization.
|
||||
Here we set up a couple of environment variables to supply arguments to the test package - the version of our cluster and the number of CPUs on `controlplane01` to aid with test parallelization.
|
||||
|
||||
Then we invoke the test package
|
||||
|
||||
|
@ -42,25 +46,19 @@ NUM_CPU=$(cat /proc/cpuinfo | grep '^processor' | wc -l)
|
|||
|
||||
cd ~
|
||||
kubetest2 noop --kubeconfig ${PWD}/.kube/config --test=ginkgo -- \
|
||||
--focus-regex='\[Conformance\]' --test-package-version $KUBE_VERSION --logtostderr --parallel $NUM_CPU
|
||||
--focus-regex='\[Conformance\]' --test-package-version $KUBE_VERSION --parallel $NUM_CPU
|
||||
```
|
||||
|
||||
While this is running, you can open an additional session on `master-1` from your workstation and watch the activity in the cluster
|
||||
|
||||
```
|
||||
vagrant ssh master-1
|
||||
```
|
||||
|
||||
then
|
||||
While this is running, you can open an additional session on `controlplane01` from your workstation and watch the activity in the cluster:
|
||||
|
||||
```
|
||||
watch kubectl get all -A
|
||||
```
|
||||
|
||||
Observations by Alistair (KodeKloud):
|
||||
Further observations by Alistair (KodeKloud):
|
||||
|
||||
This should take up to an hour to run. The number of tests run and passed will be displayed at the end. Expect some failures!
|
||||
This could take anywhere from one to several hours to run, depending on your system. The number of tests run and passed will be displayed at the end. Expect some failures!
|
||||
|
||||
I am not able to say exactly why the failed tests fail. It would take days to go though the truly enormous test code base to determine why the tests that fail do so.
|
||||
I am not able to say exactly why the failed tests fail, beyond the assumptions above. It would take days to go through the truly enormous test code base to determine why the tests that fail do so.
|
||||
|
||||
Prev: [Smoke Test](16-smoke-test.md)
|
||||
Prev: [Smoke Test](./16-smoke-test.md)
|
|
@ -1,9 +1,9 @@
|
|||
# Differences between original and this solution
|
||||
|
||||
* Platform: I use VirtualBox to set up a local cluster; the original uses GCP.
|
||||
* Nodes: 2 master and 2 worker vs 2 master and 3 worker nodes.
|
||||
* Nodes: 2 controlplane and 2 worker vs 2 controlplane and 3 worker nodes.
|
||||
* Configure 1 worker node normally and the second one with TLS bootstrap.
|
||||
* Node names: I use worker-1 worker-2 instead of worker-0 worker-1.
|
||||
* Node names: I use node01 and node02 instead of worker-0 and worker-1.
|
||||
* IP Addresses: I use statically assigned IPs on private network.
|
||||
* Certificate file names: I use \<name\>.crt for the public certificate and \<name\>.key for the private key file, whereas the original uses \<name\>.pem for the certificate and \<name\>-key.pem for the private key.
|
||||
* I generate separate certificates for etcd-server instead of reusing the kube-apiserver certificate.
|
||||
|
|
|
@ -1,10 +1,10 @@
|
|||
# Verify Certificates in Master-1/2 & Worker-1
|
||||
# Verify Certificates in controlplane01/02 & node01
|
||||
|
||||
> Note: This script is only intended to work with a kubernetes cluster setup following instructions from this repository. It is not a generic script that works for all kubernetes clusters. Feel free to send in PRs with improvements.
|
||||
|
||||
This script was developed to assist the verification of certificates for each Kubernetes component as part of building the cluster. This script may be executed as soon as you have completed the Lab steps up to [Bootstrapping the Kubernetes Worker Nodes](./09-bootstrapping-kubernetes-workers.md). The script is named as `cert_verify.sh` and it is available at `/home/vagrant` directory of master-1 , master-2 and worker-1 nodes. If it's not already available there copy the script to the nodes from [here](../vagrant/ubuntu/cert_verify.sh).
|
||||
This script was developed to assist with verification of the certificates for each Kubernetes component as part of building the cluster. It may be executed as soon as you have completed the lab steps up to [Bootstrapping the Kubernetes Worker Nodes](./09-bootstrapping-kubernetes-workers.md). The script is named `cert_verify.sh` and is available in the `/home/vagrant` directory of the controlplane01, controlplane02 and node01 nodes. If it is not already there, copy the script to the nodes from [here](../vagrant/ubuntu/cert_verify.sh).
|
||||
|
||||
It is important that the script execution needs to be done by following commands after logging into the respective virtual machines [ whether it is master-1 / master-2 / worker-1 ] via SSH.
|
||||
It is important that the script is executed with the following commands after logging into the respective virtual machine (controlplane01 / controlplane02 / node01) via SSH.
|
||||
|
||||
```bash
|
||||
cd ~
|
||||
|
|
|
@ -1,56 +0,0 @@
|
|||
# certified-kubernetes-administrator-course-answers
|
||||
Practice question answers for Certified Kubernetes Administrator course
|
||||
|
||||
This repository contains answers for the practice tests hosted on the course [Certified Kubernetes Administrators Course](https://kodekloud.com/p/certified-kubernetes-administrator-with-practice-tests)
|
||||
|
||||
| Section | Test |
|
||||
|----------------------------------|------------------------------------|
|
||||
| Core Concepts | Practice Test Introduction |
|
||||
| Core Concepts | ReplicaSets |
|
||||
| Core Concepts | Deployments |
|
||||
| Core Concepts | Namespaces |
|
||||
| Core Concepts | Services Cluster IP |
|
||||
| Scheduling | Manual Scheduling |
|
||||
| Scheduling | Labels and Selectors |
|
||||
| Scheduling | Resource Limits |
|
||||
| Scheduling | DaemonSets |
|
||||
| Scheduling | Multiple Schedulers |
|
||||
| Logging & Monitoring | Monitor Cluster Components |
|
||||
| Logging & Monitoring | Managing Application Logs |
|
||||
| Application Lifecycle Management | Rolling Updates and Rollbacks |
|
||||
| Application Lifecycle Management | Commands and Arguments |
|
||||
| Application Lifecycle Management | ConfigMaps |
|
||||
| Application Lifecycle Management | Secrets |
|
||||
| Application Lifecycle Management | Liveness Probes |
|
||||
| Cluster Maintenance | OS Upgrades |
|
||||
| Cluster Maintenance | Cluster Upgrade Process |
|
||||
| Cluster Maintenance | [Backup ETCD](/cluster-maintenance-backup-etcd) |
|
||||
| Security | View Certificate Details |
|
||||
| Security | Certificates API |
|
||||
| Security | KubeConfig |
|
||||
| Security | Role Based Access Controls |
|
||||
| Security | Cluster Roles |
|
||||
| Security | Image Security |
|
||||
| Security | Security Contexts |
|
||||
| Security | Network Policies |
|
||||
| Storage | Persistent Volume Claims |
|
||||
| Networking | CNI in kubernetes |
|
||||
| Networking | CNI weave |
|
||||
| Networking | CNI Weave Read |
|
||||
| Networking | CNI Deploy Weave |
|
||||
| Networking | Service Networking |
|
||||
| Networking | CoreDNS in Kubernetes |
|
||||
| Install | Bootstrap worker node |
|
||||
| Install | Bootstrap worker node - 2 |
|
||||
| Install | End to End Tests - Run and Analyze |
|
||||
| Troubleshooting | Application Failure |
|
||||
| Troubleshooting | Control Plane Failure |
|
||||
| Troubleshooting | Worker Node Failure |
|
||||
|
||||
|
||||
# Contributing Guide
|
||||
|
||||
1. The folder structure for all topics and associated practice tests are created already. Use the same pattern to create one if it doesn't exist.
|
||||
2. Create a file with your answers. If you have a different answer than the one that is already there, create a new answer file with your name in it.
|
||||
4. Do not post the entire question. Only post the question number.
|
||||
3. Send in a pull request
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1,79 +0,0 @@
|
|||
|
||||
|
||||
# 1. Get etcdctl utility if it's not already present.
|
||||
|
||||
Reference: https://github.com/etcd-io/etcd/releases
|
||||
|
||||
```
|
||||
ETCD_VER=v3.4.9
|
||||
|
||||
# choose either URL
|
||||
GOOGLE_URL=https://storage.googleapis.com/etcd
|
||||
GITHUB_URL=https://github.com/etcd-io/etcd/releases/download
|
||||
DOWNLOAD_URL=${GOOGLE_URL}
|
||||
|
||||
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
|
||||
rm -rf /tmp/etcd-download-test && mkdir -p /tmp/etcd-download-test
|
||||
|
||||
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
|
||||
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download-test --strip-components=1
|
||||
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
|
||||
|
||||
/tmp/etcd-download-test/etcd --version
|
||||
ETCDCTL_API=3 /tmp/etcd-download-test/etcdctl version
|
||||
|
||||
mv /tmp/etcd-download-test/etcdctl /usr/bin
|
||||
```
|
||||
|
||||
# 2. Backup
|
||||
|
||||
```
|
||||
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
|
||||
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
|
||||
snapshot save /opt/snapshot-pre-boot.db
|
||||
```
|
||||
|
||||
Note: In this case, the **ETCD** is running on the same server where we are running the commands (which is the *controlplane* node). As a result, the **--endpoint** argument is optional and can be ignored.
|
||||
|
||||
The options **--cert, --cacert and --key** are mandatory to authenticate to the ETCD server to take the backup.
|
||||
|
||||
If you want to take a backup of the ETCD service running on a different machine, you will have to provide the correct endpoint to that server (which is the IP Address and port of the etcd server with the **--endpoint** argument)
|
||||
|
||||
# -----------------------------
|
||||
# Disaster Happens
|
||||
# -----------------------------
|
||||
|
||||
# 3. Restore ETCD Snapshot to a new folder
|
||||
|
||||
```
|
||||
ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-from-backup \
|
||||
snapshot restore /opt/snapshot-pre-boot.db
|
||||
```
|
||||
|
||||
Note: In this case, we are restoring the snapshot to a different directory but in the same server where we took the backup (**the controlplane node)**
|
||||
As a result, the only required option for the restore command is the **--data-dir**.
|
||||
|
||||
# 4. Modify /etc/kubernetes/manifests/etcd.yaml
|
||||
|
||||
We have now restored the etcd snapshot to a new path on the controlplane - **/var/lib/etcd-from-backup**, so, the only change to be made in the YAML file, is to change the hostPath for the volume called **etcd-data** from old directory (/var/lib/etcd) to the new directory **/var/lib/etcd-from-backup**.
|
||||
|
||||
```
|
||||
volumes:
|
||||
- hostPath:
|
||||
path: /var/lib/etcd-from-backup
|
||||
type: DirectoryOrCreate
|
||||
name: etcd-data
|
||||
```
|
||||
With this change, /var/lib/etcd on the **container** points to /var/lib/etcd-from-backup on the **controlplane** (which is what we want)
|
||||
|
||||
|
||||
When this file is updated, the ETCD pod is automatically re-created as this is a static pod placed under the `/etc/kubernetes/manifests` directory.
|
||||
|
||||
|
||||
> Note: as the ETCD pod has changed it will automatically restart, and also kube-controller-manager and kube-scheduler. Wait 1-2 to mins for this pods to restart. You can make a `watch "docker ps | grep etcd"` to see when the ETCD pod is restarted.
|
||||
|
||||
> Note2: If the etcd pod is not getting `Ready 1/1`, then restart it by `kubectl delete pod -n kube-system etcd-controlplane` and wait 1 minute.
|
||||
|
||||
> Note3: This is the simplest way to make sure that ETCD uses the restored data after the ETCD pod is recreated. You **don't** have to change anything else.
|
||||
|
||||
**If** you do change **--data-dir** to **/var/lib/etcd-from-backup** in the YAML file, make sure that the **volumeMounts** for **etcd-data** is updated as well, with the mountPath pointing to /var/lib/etcd-from-backup (**THIS COMPLETE STEP IS OPTIONAL AND NEED NOT BE DONE FOR COMPLETING THE RESTORE**)
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1,182 +0,0 @@
|
|||
## Create Bootstrap Token on Master Node
|
||||
|
||||
This is the solution to the practice test on TLS Bootstrapping hosted [here](https://kodekloud.com/courses/certified-kubernetes-administrator-with-practice-tests/lectures/9833234)
|
||||
|
||||
```
|
||||
cat > bootstrap-token-09426c.yaml <<EOF
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
# Name MUST be of form "bootstrap-token-<token id>"
|
||||
name: bootstrap-token-09426c
|
||||
namespace: kube-system
|
||||
|
||||
# Type MUST be 'bootstrap.kubernetes.io/token'
|
||||
type: bootstrap.kubernetes.io/token
|
||||
stringData:
|
||||
# Human readable description. Optional.
|
||||
description: "The default bootstrap token generated by 'kubeadm init'."
|
||||
|
||||
# Token ID and secret. Required.
|
||||
token-id: 09426c
|
||||
token-secret: g262dkeidk3dx21x
|
||||
|
||||
# Expiration. Optional.
|
||||
expiration: 2020-03-10T03:22:11Z
|
||||
|
||||
# Allowed usages.
|
||||
usage-bootstrap-authentication: "true"
|
||||
usage-bootstrap-signing: "true"
|
||||
|
||||
# Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
|
||||
auth-extra-groups: system:bootstrappers:node03
|
||||
EOF
|
||||
```
|
||||
|
||||
`master$ kubectl create -f bootstrap-token-09426c.yaml`
|
||||
|
||||
## Create Cluster Role Binding
|
||||
|
||||
```
|
||||
kubectl create clusterrolebinding crb-to-create-csr --clusterrole=system:node-bootstrapper --group=system:bootstrappers
|
||||
```
|
||||
|
||||
--------------- OR ---------------
|
||||
|
||||
```
|
||||
cat > crb-to-create-csr <<-EOF
|
||||
# enable bootstrapping nodes to create CSR
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: crb-to-create-csr
|
||||
subjects:
|
||||
- kind: Group
|
||||
name: system:bootstrappers
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: system:node-bootstrapper
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
EOF
|
||||
```
|
||||
|
||||
`master$ kubectl create -f crb-to-create-csr.yaml`
|
||||
|
||||
|
||||
# Authorize workers(kubelets) to approve CSR
|
||||
|
||||
```
|
||||
kubectl create clusterrolebinding crb-to-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
|
||||
```
|
||||
|
||||
--------------- OR ---------------
|
||||
|
||||
```
|
||||
cat > crb-to-approve-csr.yaml <<EOF
|
||||
# Approve all CSRs for the group "system:bootstrappers"
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: crb-node-autoapprove-csr
|
||||
subjects:
|
||||
- kind: Group
|
||||
name: system:bootstrappers
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
EOF
|
||||
```
|
||||
|
||||
`master$ kubectl create -f crb-to-approve-csr.yaml`
|
||||
|
||||
|
||||
# Auto rotate/renew certificates
|
||||
|
||||
```
|
||||
kubectl create clusterrolebinding crb-autorenew-csr-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
|
||||
```
|
||||
|
||||
--------------- OR ---------------
|
||||
|
||||
```
|
||||
cat > auto-approve-renewals-for-nodes.yaml <<EOF
|
||||
# Approve renewal CSRs for the group "system:nodes"
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
metadata:
|
||||
name: crb-autorenew-csr-for-nodes
|
||||
subjects:
|
||||
- kind: Group
|
||||
name: system:nodes
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
roleRef:
|
||||
kind: ClusterRole
|
||||
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
|
||||
apiGroup: rbac.authorization.k8s.io
|
||||
EOF
|
||||
```
|
||||
|
||||
`kubectl create -f auto-approve-renewals-for-nodes.yaml`
|
||||
|
||||
|
||||
# Create bootstrap context on node03
|
||||
|
||||
# Important: Replace the kube-apiserver IP address in the command below with the correct one.
|
||||
This can be obtained by running the below command on the master node:
|
||||
```
|
||||
kubectl cluster-info
|
||||
```
|
||||
Create the kubeconfig file:
|
||||
|
||||
```
|
||||
kubectl config --kubeconfig=/tmp/bootstrap-kubeconfig set-cluster bootstrap --server='https://<replace kube-apiserver IP>:6443' --certificate-authority=/etc/kubernetes/pki/ca.crt
|
||||
kubectl config --kubeconfig=/tmp/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=09426c.g262dkeidk3dx21x
|
||||
kubectl config --kubeconfig=/tmp/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
|
||||
kubectl config --kubeconfig=/tmp/bootstrap-kubeconfig use-context bootstrap
|
||||
```
|
||||
|
||||
|
||||
# Create Kubelet Service
|
||||
|
||||
Create new service file
|
||||
|
||||
```
|
||||
cat > /etc/systemd/system/kubelet.service <<-EOF
|
||||
[Unit]
|
||||
Description=Kubernetes Kubelet
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
|
||||
[Service]
|
||||
ExecStart=/usr/bin/kubelet \
|
||||
--bootstrap-kubeconfig=/tmp/bootstrap-kubeconfig \
|
||||
--kubeconfig=/var/lib/kubelet/kubeconfig \
|
||||
--register-node=true \
|
||||
--cgroup-driver=cgroupfs \
|
||||
--v=2
|
||||
Restart=on-failure
|
||||
StandardOutput=file:/var/kubeletlog1.log
|
||||
StandardError=file:/var/kubeletlog2.log
|
||||
RestartSec=5
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
|
||||
EOF
|
||||
```
|
||||
|
||||
Reload service and start kubelet
|
||||
|
||||
```
|
||||
node03$ systemctl daemon-reload
|
||||
node03$ service kubelet start
|
||||
```
|
||||
|
||||
Verify node has joined the cluster
|
||||
|
||||
```
|
||||
master$ kubectl get nodes
|
||||
|
||||
```
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -1 +0,0 @@
|
|||
# Practice Test Solution
|
|
@ -24,17 +24,14 @@ class State(Enum):
|
|||
NONE = 0
|
||||
SCRIPT = 1
|
||||
|
||||
parser = argparse.ArgumentParser(description="Extract scripts from markdown")
|
||||
parser.add_argument("--path", '-p', required=True, help='Path to markdown docs')
|
||||
args = parser.parse_args()
|
||||
|
||||
docs_path = os.path.abspath(args.path)
|
||||
this_file_dir = os.path.dirname(os.path.abspath(__file__))
|
||||
docs_path = os.path.abspath(os.path.join(this_file_dir, '../docs'))
|
||||
|
||||
if not os.path.isdir(docs_path):
|
||||
print (f'Invalid path: {docs_path}')
|
||||
print (f'Expected "docs" at: {docs_path}')
|
||||
exit(1)
|
||||
|
||||
qs_path = os.path.abspath(os.path.join(docs_path, '../quick-steps'))
|
||||
qs_path = os.path.abspath(os.path.join(this_file_dir, '../quick-steps'))
|
||||
|
||||
if not os.path.isdir(qs_path):
|
||||
os.makedirs(qs_path)
|
||||
|
@ -43,6 +40,8 @@ newline = chr(10) # In case running on Windows (plus writing files as bina
|
|||
file_number_rx = re.compile(r'^(?P<number>\d+)')
|
||||
comment_rx = re.compile(r'^\[//\]:\s\#\s\((?P<token>\w+):(?P<value>.*)\)\s*$')
|
||||
choice_rx = re.compile(r'^\s*-+\s+OR\s+-+')
|
||||
ssh_copy_id_rx = re.compile(r'(?P<indent>\s*)ssh-copy-id.*@(?P<host>\w+)')
|
||||
script_begin_rx = re.compile(r'^(?P<indent>\s*)```bash')
|
||||
script_begin = '```bash'
|
||||
script_end = '```'
|
||||
script_open = ('{' + newline).encode('utf-8')
|
||||
|
@ -60,8 +59,12 @@ def write_script(filename: str, script: list):
|
|||
|
||||
output_file_no = 1
|
||||
script = []
|
||||
indent = 0
|
||||
output_file = None
|
||||
for doc in glob.glob(os.path.join(docs_path, '*.md')):
|
||||
for doc in sorted(glob.glob(os.path.join(docs_path, '*.md'))):
|
||||
if 'e2e-tests' in doc:
|
||||
# Skip this for scripted install
|
||||
continue
|
||||
print(doc)
|
||||
state = State.NONE
|
||||
ignore_next_script = False
|
||||
|
@ -128,12 +131,14 @@ for doc in glob.glob(os.path.join(docs_path, '*.md')):
|
|||
'#######################################################################',
|
||||
newline
|
||||
])
|
||||
elif line == script_begin:
|
||||
elif script_begin_rx.match(line):
|
||||
m = script_begin_rx.match(line)
|
||||
indent = len(m['indent'])
|
||||
state = State.SCRIPT
|
||||
elif choice_rx.match(line):
|
||||
ignore_next_script = True
|
||||
elif state == State.SCRIPT:
|
||||
if line == script_end:
|
||||
if line == (' ' * indent) + script_end:
|
||||
state = State.NONE
|
||||
script.append(newline)
|
||||
ignore_next_script = False
|
||||
|
@ -141,8 +146,12 @@ for doc in glob.glob(os.path.join(docs_path, '*.md')):
|
|||
# script.append('}')
|
||||
# script.append(line)
|
||||
# script.append('{')
|
||||
elif not (ignore_next_script or line == '{' or line == '}'):
|
||||
script.append(line)
|
||||
elif not (ignore_next_script or line == (' ' * indent) + '{' or line == (' ' * indent) + '}'):
|
||||
m = ssh_copy_id_rx.match(line)
|
||||
if m:
|
||||
script.append(f'{m["indent"]}echo $(whoami) | sshpass ssh-copy-id -f -o StrictHostKeyChecking=no $(whoami)@{m["host"]}')
|
||||
else:
|
||||
script.append(line[indent:])
|
||||
if script:
|
||||
# fns = '-'.join(file_nos[1:])
|
||||
output_file = os.path.join(qs_path, f'{output_file_no}-{current_host}.sh')
|
||||
|
|
|
@ -6,7 +6,6 @@ A few prerequisites are handled by the VM provisioning steps.
|
|||
|
||||
## Kernel Settings
|
||||
|
||||
1. Disable cgroups v2. I found that Kubernetes currently doesn't play nice with cgroups v2, therefore we need to set a kernel boot parameter in grub to switch back to v1.
|
||||
1. Install the `br_netfilter` kernel module that permits kube-proxy to manipulate IP tables rules.
|
||||
1. Add the two tunables `net.bridge.bridge-nf-call-iptables=1` and `net.ipv4.ip_forward=1`, which are also required for successful pod networking (a sketch of applying these is shown below).
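A generic sketch of how such settings might be applied; this is not the repo's actual provisioning script, and the sysctl file name is an assumption:

```bash
# Load the br_netfilter module and make the two tunables persistent
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```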
|
||||
|
||||
|
@ -17,4 +16,4 @@ A few prerequisites are handled by the VM provisioning steps.
|
|||
|
||||
## Other settings
|
||||
|
||||
1. Install configs for `vim` and `tmux` on master-1
|
||||
1. Install configs for `vim` and `tmux` on controlplane01
|
||||
|
|
|
@ -20,9 +20,9 @@ if ram_selector < 8
|
|||
raise "Unsufficient memory #{RAM_SIZE}GB. min 8GB"
|
||||
end
|
||||
RESOURCES = {
|
||||
"master" => {
|
||||
"control" => {
|
||||
1 => {
|
||||
# master-1 bigger since it may run e2e tests.
|
||||
# controlplane01 bigger since it may run e2e tests.
|
||||
"ram" => [ram_selector * 128, 2048].max(),
|
||||
"cpu" => CPU_CORES >= 12 ? 4 : 2,
|
||||
},
|
||||
|
@ -61,7 +61,7 @@ def provision_kubernetes_node(node)
|
|||
end
|
||||
|
||||
# Define the number of master and worker nodes. You should not change this
|
||||
NUM_MASTER_NODE = 2
|
||||
NUM_CONTROL_NODES = 2
|
||||
NUM_WORKER_NODE = 2
|
||||
|
||||
# Host address start points
|
||||
|
@ -89,21 +89,21 @@ Vagrant.configure("2") do |config|
|
|||
# `vagrant box outdated`. This is not recommended.
|
||||
config.vm.box_check_update = false
|
||||
|
||||
# Provision Master Nodes
|
||||
(1..NUM_MASTER_NODE).each do |i|
|
||||
config.vm.define "master-#{i}" do |node|
|
||||
# Provision Control Nodes
|
||||
(1..NUM_CONTROL_NODES).each do |i|
|
||||
config.vm.define "controlplane0#{i}" do |node|
|
||||
# Name shown in the GUI
|
||||
node.vm.provider "virtualbox" do |vb|
|
||||
vb.name = "kubernetes-ha-master-#{i}"
|
||||
vb.memory = RESOURCES["master"][i > 2 ? 2 : i]["ram"]
|
||||
vb.cpus = RESOURCES["master"][i > 2 ? 2 : i]["cpu"]
|
||||
vb.name = "kubernetes-ha-controlplane-#{i}"
|
||||
vb.memory = RESOURCES["control"][i > 2 ? 2 : i]["ram"]
|
||||
vb.cpus = RESOURCES["control"][i > 2 ? 2 : i]["cpu"]
|
||||
end
|
||||
node.vm.hostname = "master-#{i}"
|
||||
node.vm.hostname = "controlplane0#{i}"
|
||||
node.vm.network :private_network, ip: IP_NW + "#{MASTER_IP_START + i}"
|
||||
node.vm.network "forwarded_port", guest: 22, host: "#{2710 + i}"
|
||||
provision_kubernetes_node node
|
||||
if i == 1
|
||||
# Install (opinionated) configs for vim and tmux on master-1. These used by the author for CKA exam.
|
||||
# Install (opinionated) configs for vim and tmux on controlplane01. These are used by the author for the CKA exam.
|
||||
node.vm.provision "file", source: "./ubuntu/tmux.conf", destination: "$HOME/.tmux.conf"
|
||||
node.vm.provision "file", source: "./ubuntu/vimrc", destination: "$HOME/.vimrc"
|
||||
end
|
||||
|
@ -127,13 +127,13 @@ Vagrant.configure("2") do |config|
|
|||
|
||||
# Provision Worker Nodes
|
||||
(1..NUM_WORKER_NODE).each do |i|
|
||||
config.vm.define "worker-#{i}" do |node|
|
||||
config.vm.define "node0#{i}" do |node|
|
||||
node.vm.provider "virtualbox" do |vb|
|
||||
vb.name = "kubernetes-ha-worker-#{i}"
|
||||
vb.name = "kubernetes-ha-node-#{i}"
|
||||
vb.memory = RESOURCES["worker"]["ram"]
|
||||
vb.cpus = RESOURCES["worker"]["cpu"]
|
||||
end
|
||||
node.vm.hostname = "worker-#{i}"
|
||||
node.vm.hostname = "node0#{i}"
|
||||
node.vm.network :private_network, ip: IP_NW + "#{NODE_IP_START + i}"
|
||||
node.vm.network "forwarded_port", guest: 22, host: "#{2720 + i}"
|
||||
provision_kubernetes_node node
|
||||
|
|
|
@ -8,11 +8,11 @@ FAILED='\033[0;31;1m'
|
|||
NC='\033[0m'
|
||||
|
||||
# IP addresses
|
||||
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
|
||||
MASTER_1=$(dig +short master-1)
|
||||
MASTER_2=$(dig +short master-2)
|
||||
WORKER_1=$(dig +short worker-1)
|
||||
WORKER_2=$(dig +short worker-2)
|
||||
PRIMARY_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
|
||||
CONTROL01=$(dig +short controlplane01)
|
||||
CONTROL02=$(dig +short controlplane02)
|
||||
NODE01=$(dig +short node01)
|
||||
NODE02=$(dig +short node02)
|
||||
LOADBALANCER=$(dig +short loadbalancer)
|
||||
LOCALHOST="127.0.0.1"
|
||||
|
||||
|
@ -76,21 +76,21 @@ SYSTEMD_KS_FILE=/etc/systemd/system/kube-scheduler.service
|
|||
### WORKER NODES ###
|
||||
|
||||
# Worker-1 cert details
|
||||
WORKER_1_CERT=/var/lib/kubelet/worker-1.crt
|
||||
WORKER_1_KEY=/var/lib/kubelet/worker-1.key
|
||||
NODE01_CERT=/var/lib/kubelet/node01.crt
|
||||
NODE01_KEY=/var/lib/kubelet/node01.key
|
||||
|
||||
# Worker-1 kubeconfig location
|
||||
WORKER_1_KUBECONFIG=/var/lib/kubelet/kubeconfig
|
||||
NODE01_KUBECONFIG=/var/lib/kubelet/kubeconfig
|
||||
|
||||
# Worker-1 kubelet config location
|
||||
WORKER_1_KUBELET=/var/lib/kubelet/kubelet-config.yaml
|
||||
NODE01_KUBELET=/var/lib/kubelet/kubelet-config.yaml
|
||||
|
||||
# Systemd worker-1 kubelet location
|
||||
SYSTEMD_WORKER_1_KUBELET=/etc/systemd/system/kubelet.service
|
||||
# Systemd node01 kubelet location
|
||||
SYSTEMD_NODE01_KUBELET=/etc/systemd/system/kubelet.service
|
||||
|
||||
# kube-proxy worker-1 location
|
||||
WORKER_1_KP_KUBECONFIG=/var/lib/kube-proxy/kubeconfig
|
||||
SYSTEMD_WORKER_1_KP=/etc/systemd/system/kube-proxy.service
|
||||
# kube-proxy node01 location
|
||||
NODE01_KP_KUBECONFIG=/var/lib/kube-proxy/kubeconfig
|
||||
SYSTEMD_NODE01_KP=/etc/systemd/system/kube-proxy.service
|
||||
|
||||
|
||||
# Function - Master node #
|
||||
|
@ -305,8 +305,8 @@ check_systemd_etcd()
|
|||
exit 1
|
||||
fi
|
||||
|
||||
if [ $IAP_URL == "https://$INTERNAL_IP:2380" ] && [ $LP_URL == "https://$INTERNAL_IP:2380" ] && [ $LC_URL == "https://$INTERNAL_IP:2379,https://127.0.0.1:2379" ] && \
|
||||
[ $AC_URL == "https://$INTERNAL_IP:2379" ]
|
||||
if [ $IAP_URL == "https://$PRIMARY_IP:2380" ] && [ $LP_URL == "https://$PRIMARY_IP:2380" ] && [ $LC_URL == "https://$PRIMARY_IP:2379,https://127.0.0.1:2379" ] && \
|
||||
[ $AC_URL == "https://$PRIMARY_IP:2379" ]
|
||||
then
|
||||
printf "${SUCCESS}ETCD initial-advertise-peer-urls, listen-peer-urls, listen-client-urls, advertise-client-urls are correct\n${NC}"
|
||||
else
|
||||
|
@ -349,7 +349,7 @@ check_systemd_api()
|
|||
SACERT="${PKI}/service-account.crt"
|
||||
KCCERT="${PKI}/apiserver-kubelet-client.crt"
|
||||
KCKEY="${PKI}/apiserver-kubelet-client.key"
|
||||
if [ $ADVERTISE_ADDRESS == $INTERNAL_IP ] && [ $CLIENT_CA_FILE == $CACERT ] && [ $ETCD_CA_FILE == $CACERT ] && \
|
||||
if [ $ADVERTISE_ADDRESS == $PRIMARY_IP ] && [ $CLIENT_CA_FILE == $CACERT ] && [ $ETCD_CA_FILE == $CACERT ] && \
|
||||
[ $ETCD_CERT_FILE == "${PKI}/etcd-server.crt" ] && [ $ETCD_KEY_FILE == "${PKI}/etcd-server.key" ] && \
|
||||
[ $KUBELET_CERTIFICATE_AUTHORITY == $CACERT ] && [ $KUBELET_CLIENT_CERTIFICATE == $KCCERT ] && [ $KUBELET_CLIENT_KEY == $KCKEY ] && \
|
||||
[ $SERVICE_ACCOUNT_KEY_FILE == $SACERT ] && [ $TLS_CERT_FILE == $APICERT ] && [ $TLS_PRIVATE_KEY_FILE == $APIKEY ]
|
||||
|
@ -435,15 +435,15 @@ if [ ! -z "$1" ]
|
|||
then
|
||||
choice=$1
|
||||
else
|
||||
echo "This script will validate the certificates in master as well as worker-1 nodes. Before proceeding, make sure you ssh into the respective node [ Master or Worker-1 ] for certificate validation"
|
||||
echo "This script will validate the certificates in master as well as node01 nodes. Before proceeding, make sure you ssh into the respective node [ Master or Worker-1 ] for certificate validation"
|
||||
while true
|
||||
do
|
||||
echo
|
||||
echo " 1. Verify certificates on Master Nodes after step 4"
|
||||
echo " 2. Verify kubeconfigs on Master Nodes after step 5"
|
||||
echo " 3. Verify kubeconfigs and PKI on Master Nodes after step 8"
|
||||
echo " 4. Verify kubeconfigs and PKI on worker-1 Node after step 10"
|
||||
echo " 5. Verify kubeconfigs and PKI on worker-2 Node after step 11"
|
||||
echo " 4. Verify kubeconfigs and PKI on node01 Node after step 10"
|
||||
echo " 5. Verify kubeconfigs and PKI on node02 Node after step 11"
|
||||
echo
|
||||
echo -n "Please select one of the above options: "
|
||||
read choice
|
||||
|
@ -469,9 +469,9 @@ SUBJ_APIKC="Subject:CN=kube-apiserver-kubelet-client,O=system:masters"
|
|||
case $choice in
|
||||
|
||||
1)
|
||||
if ! [ "${HOST}" = "master-1" -o "${HOST}" = "master-2" ]
|
||||
if ! [ "${HOST}" = "controlplane01" -o "${HOST}" = "controlplane02" ]
|
||||
then
|
||||
printf "${FAILED}Must run on master-1 or master-2${NC}\n"
|
||||
printf "${FAILED}Must run on controlplane01 or controlplane02${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
|
@ -486,7 +486,7 @@ case $choice in
|
|||
check_cert_and_key "apiserver-kubelet-client" $SUBJ_APIKC $CERT_ISSUER
|
||||
check_cert_and_key "etcd-server" $SUBJ_ETCD $CERT_ISSUER
|
||||
|
||||
if [ "${HOST}" = "master-1" ]
|
||||
if [ "${HOST}" = "controlplane01" ]
|
||||
then
|
||||
check_cert_and_key "admin" $SUBJ_ADMIN $CERT_ISSUER
|
||||
check_cert_and_key "kube-proxy" $SUBJ_KP $CERT_ISSUER
|
||||
|
@ -494,9 +494,9 @@ case $choice in
|
|||
;;
|
||||
|
||||
2)
|
||||
if ! [ "${HOST}" = "master-1" -o "${HOST}" = "master-2" ]
|
||||
if ! [ "${HOST}" = "controlplane01" -o "${HOST}" = "controlplane02" ]
|
||||
then
|
||||
printf "${FAILED}Must run on master-1 or master-2${NC}\n"
|
||||
printf "${FAILED}Must run on controlplane01 or controlplane02${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
|
@ -504,16 +504,16 @@ case $choice in
|
|||
check_kubeconfig_exists "kube-controller-manager" $HOME
|
||||
check_kubeconfig_exists "kube-scheduler" $HOME
|
||||
|
||||
if [ "${HOST}" = "master-1" ]
|
||||
if [ "${HOST}" = "controlplane01" ]
|
||||
then
|
||||
check_kubeconfig_exists "kube-proxy" $HOME
|
||||
fi
|
||||
;;
|
||||
|
||||
3)
|
||||
if ! [ "${HOST}" = "master-1" -o "${HOST}" = "master-2" ]
|
||||
if ! [ "${HOST}" = "controlplane01" -o "${HOST}" = "controlplane02" ]
|
||||
then
|
||||
printf "${FAILED}Must run on master-1 or master-2${NC}\n"
|
||||
printf "${FAILED}Must run on controlplane01 or controlplane02${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
|
@ -540,24 +540,24 @@ case $choice in
|
|||
;;
|
||||
|
||||
4)
|
||||
if ! [ "${HOST}" = "worker-1" ]
|
||||
if ! [ "${HOST}" = "node01" ]
|
||||
then
|
||||
printf "${FAILED}Must run on worker-1${NC}\n"
|
||||
printf "${FAILED}Must run on node01${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
CERT_LOCATION=/var/lib/kubernetes/pki
|
||||
check_cert_only "ca" $SUBJ_CA $CERT_ISSUER
|
||||
check_cert_and_key "kube-proxy" $SUBJ_KP $CERT_ISSUER
|
||||
check_cert_and_key "worker-1" "Subject:CN=system:node:worker-1,O=system:nodes" $CERT_ISSUER
|
||||
check_cert_and_key "node01" "Subject:CN=system:node:node01,O=system:nodes" $CERT_ISSUER
|
||||
check_kubeconfig "kube-proxy" "/var/lib/kube-proxy" "https://${LOADBALANCER}:6443"
|
||||
check_kubeconfig "kubelet" "/var/lib/kubelet" "https://${LOADBALANCER}:6443"
|
||||
;;
|
||||
|
||||
5)
|
||||
if ! [ "${HOST}" = "worker-2" ]
|
||||
if ! [ "${HOST}" = "node02" ]
|
||||
then
|
||||
printf "${FAILED}Must run on worker-2${NC}\n"
|
||||
printf "${FAILED}Must run on node02${NC}\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
|
@ -566,7 +566,7 @@ case $choice in
|
|||
check_cert_and_key "kube-proxy" $SUBJ_KP $CERT_ISSUER
|
||||
|
||||
CERT_LOCATION=/var/lib/kubelet/pki
|
||||
check_cert_only "kubelet-client-current" "Subject:O=system:nodes,CN=system:node:worker-2" $CERT_ISSUER
|
||||
check_cert_only "kubelet-client-current" "Subject:O=system:nodes,CN=system:node:node02" $CERT_ISSUER
|
||||
check_kubeconfig "kube-proxy" "/var/lib/kube-proxy" "https://${LOADBALANCER}:6443"
|
||||
;;
|
||||
|
||||
|
|
|
@ -1,5 +1,22 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Enable password auth in sshd so we can use ssh-copy-id
|
||||
sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
|
||||
sed -i --regexp-extended 's/#?PasswordAuthentication (yes|no)/PasswordAuthentication yes/' /etc/ssh/sshd_config
|
||||
sed -i --regexp-extended 's/#?Include \/etc\/ssh\/sshd_config.d\/\*.conf/#Include \/etc\/ssh\/sshd_config.d\/\*.conf/' /etc/ssh/sshd_config
|
||||
sed -i 's/KbdInteractiveAuthentication no/KbdInteractiveAuthentication yes/' /etc/ssh/sshd_config
|
||||
systemctl restart sshd
|
||||
|
||||
if [ ! -d /home/vagrant/.ssh ]
|
||||
then
|
||||
mkdir /home/vagrant/.ssh
|
||||
chmod 700 /home/vagrant/.ssh
|
||||
chown vagrant:vagrant /home/vagrant/.ssh
|
||||
fi
|
||||
|
||||
|
||||
if [ "$(hostname)" = "controlplane01" ]
|
||||
then
|
||||
sh -c 'sudo apt update' &> /dev/null
|
||||
sh -c 'sudo apt-get install -y sshpass' &> /dev/null
|
||||
fi
|
||||
|
||||
|
|
|
@ -1,12 +1,21 @@
|
|||
#!/bin/bash
|
||||
#
|
||||
# Set up /etc/hosts so we can resolve all the machines in the VirtualBox network
|
||||
set -ex
|
||||
set -e
|
||||
IFNAME=$1
|
||||
THISHOST=$2
|
||||
ADDRESS="$(ip -4 addr show $IFNAME | grep "inet" | head -1 |awk '{print $2}' | cut -d/ -f1)"
|
||||
NETWORK=$(echo $ADDRESS | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s", $1, $2, $3) }')
|
||||
sed -e "s/^.*${HOSTNAME}.*/${ADDRESS} ${HOSTNAME} ${HOSTNAME}.local/" -i /etc/hosts
|
||||
|
||||
# Host will have 3 interfaces: lo, DHCP assigned NAT network and static on VM network
|
||||
# We want the VM network
|
||||
PRIMARY_IP="$(ip -4 addr show | grep "inet" | egrep -v '(dynamic|127\.0\.0)' | awk '{print $2}' | cut -d/ -f1)"
|
||||
NETWORK=$(echo $PRIMARY_IP | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s", $1, $2, $3) }')
|
||||
#sed -e "s/^.*${HOSTNAME}.*/${PRIMARY_IP} ${HOSTNAME} ${HOSTNAME}.local/" -i /etc/hosts
|
||||
|
||||
# Export PRIMARY IP as an environment variable
|
||||
echo "PRIMARY_IP=${PRIMARY_IP}" >> /etc/environment
|
||||
|
||||
# Export architecture as environment variable to download correct versions of software
|
||||
echo "ARCH=amd64" | sudo tee -a /etc/environment > /dev/null
|
||||
|
||||
# remove ubuntu-jammy entry
|
||||
sed -e '/^.*ubuntu-jammy.*/d' -i /etc/hosts
|
||||
|
@ -14,9 +23,9 @@ sed -e "/^.*$2.*/d" -i /etc/hosts
|
|||
|
||||
# Update /etc/hosts about other hosts
|
||||
cat >> /etc/hosts <<EOF
|
||||
${NETWORK}.11 master-1
|
||||
${NETWORK}.12 master-2
|
||||
${NETWORK}.21 worker-1
|
||||
${NETWORK}.22 worker-2
|
||||
${NETWORK}.11 controlplane01
|
||||
${NETWORK}.12 controlplane02
|
||||
${NETWORK}.21 node01
|
||||
${NETWORK}.22 node02
|
||||
${NETWORK}.30 loadbalancer
|
||||
EOF
|
||||
|
|