Adjust markdown formatting (#328)

* Adjust markdown formatting:

* Remove extra capitalization.
* Remove extra curly braces {} inside Bash code blocks.
* Use inline code formatting `` for IP addresses, file names and commands.
* Add a dot at the end of sentences.
* Use list formatting in `differences-to-original.md`. Also add escaping for angle brackets <>.
* No logic changes were made, only formatting improvements.

* 01-prerequisites.md: remove extra capitalization, remove extra space in "Virtual Box"

* 01-prerequisites.md: split text into different lines (before, it was rendered into one line)

* Remove extra capitalization, use inline code blocks, add a dot at the end of sentences.

* 02-compute-resources.md: add escaping for angle brackets <>.

* 03-client-tools.md: remove extra capitalization, use inline code blocks

* 04-certificate-authority.md: remove extra capitalization, use inline code blocks, remove extra curly braces {} inside Bash code blocks

* 04-certificate-authority.md: remove extra curly braces {} inside Bash code blocks

* Revert back: all "remove extra curly braces {} inside Bash code blocks"

As per @fireflycons https://github.com/mmumshad/kubernetes-the-hard-way/pull/328#issuecomment-1926329908 :

> They are there for a reason. If you paste a block of code within braces, then it is not executed immediately by the shell - you have to press ENTER. Quite often when making changes to this repo and I have multiple terminals open, it gives me a chance to check that I have pasted the block into the correct terminal before it executes in the wrong terminal and borks everything.
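For anyone unfamiliar with the trick: wrapping a pasted snippet in a brace group makes the shell read the whole group as a single compound command, so nothing runs until the closing brace is submitted. A minimal sketch (the commands are placeholders):

```bash
{
  # the shell buffers these lines; if the paste stops at the closing brace
  # without a trailing newline, nothing executes until you press ENTER
  echo "first command"
  echo "second command"
}
```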

* Revert back: all "remove extra curly braces {} inside Bash code blocks"

* Revert back all "Remove extra capitalization", as per request from @fireflycons

https://github.com/mmumshad/kubernetes-the-hard-way/pull/328#issuecomment-1944388993
pull/634/head
Alexey Vazhnov 2024-02-21 20:50:31 +00:00 committed by GitHub
parent e982efed9e
commit 645b296cb6
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
10 changed files with 71 additions and 76 deletions

View File

@@ -2,12 +2,12 @@
## VM Hardware Requirements
8 GB of RAM (Preferably 16 GB)
50 GB Disk space
- 8 GB of RAM (preferably 16 GB)
- 50 GB disk space
## Virtual Box
## VirtualBox
Download and Install [VirtualBox](https://www.virtualbox.org/wiki/Downloads) on any one of the supported platforms:
Download and install [VirtualBox](https://www.virtualbox.org/wiki/Downloads) on any one of the supported platforms:
- Windows hosts
- OS X hosts (x86 only, not Apple Silicon M-series)
@@ -19,7 +19,7 @@ Download and Install [VirtualBox](https://www.virtualbox.org/wiki/Downloads) on
Once VirtualBox is installed you may chose to deploy virtual machines manually on it.
Vagrant provides an easier way to deploy multiple virtual machines on VirtualBox more consistently.
Download and Install [Vagrant](https://www.vagrantup.com/) on your platform.
Download and install [Vagrant](https://www.vagrantup.com/) on your platform.
- Windows
- Debian
@@ -43,7 +43,7 @@ If you do change any of these, **please consider that a personal preference and
### Virtual Machine Network
The network used by the Virtual Box virtual machines is `192.168.56.0/24`.
The network used by the VirtualBox virtual machines is `192.168.56.0/24`.
To change this, edit the [Vagrantfile](../vagrant/Vagrantfile) in your cloned copy (do not edit directly in github), and set the new value for the network prefix at line 9. This should not overlap any of the other network settings.

View File

@@ -1,14 +1,14 @@
# Provisioning Compute Resources
Note: You must have VirtualBox and Vagrant configured at this point
Note: You must have VirtualBox and Vagrant configured at this point.
Download this github repository and cd into the vagrant folder
Download this github repository and cd into the vagrant folder:
```bash
git clone https://github.com/mmumshad/kubernetes-the-hard-way.git
```
CD into vagrant directory
CD into vagrant directory:
```bash
cd kubernetes-the-hard-way/vagrant
@@ -18,7 +18,7 @@ The `Vagrantfile` is configured to assume you have at least an 8 core CPU which
This will not work if you have less than 8GB of RAM.
Run Vagrant up
Run Vagrant up:
```bash
vagrant up
@@ -29,10 +29,10 @@ This does the below:
- Deploys 5 VMs - 2 Master, 2 Worker and 1 Loadbalancer with the name 'kubernetes-ha-* '
> This is the default settings. This can be changed at the top of the Vagrant file.
> If you choose to change these settings, please also update vagrant/ubuntu/vagrant/setup-hosts.sh
> to add the additional hosts to the /etc/hosts default before running "vagrant up".
> If you choose to change these settings, please also update `vagrant/ubuntu/vagrant/setup-hosts.sh`
> to add the additional hosts to the `/etc/hosts` default before running `vagrant up`.
- Set's IP addresses in the range 192.168.56
- Set's IP addresses in the range `192.168.56.x`
| VM | VM Name | Purpose | IP | Forwarded Port | RAM |
| ------------ | ---------------------- |:-------------:| -------------:| ----------------:|-----:|
@@ -57,27 +57,27 @@ There are two ways to SSH into the nodes:
### 1. SSH using Vagrant
From the directory you ran the `vagrant up` command, run `vagrant ssh <vm>` for example `vagrant ssh master-1`.
From the directory you ran the `vagrant up` command, run `vagrant ssh \<vm\>` for example `vagrant ssh master-1`.
> Note: Use VM field from the above table and not the VM name itself.
### 2. SSH Using SSH Client Tools
Use your favourite SSH Terminal tool (putty).
Use your favourite SSH terminal tool (putty).
Use the above IP addresses. Username and password based SSH is disabled by default.
Vagrant generates a private key for each of these VMs. It is placed under the .vagrant folder (in the directory you ran the `vagrant up` command from) at the below path for each VM:
Use the above IP addresses. Username and password-based SSH is disabled by default.
**Private Key Path:** `.vagrant/machines/<machine name>/virtualbox/private_key`
Vagrant generates a private key for each of these VMs. It is placed under the `.vagrant` folder (in the directory you ran the `vagrant up` command from) at the below path for each VM:
**Username/Password:** `vagrant/vagrant`
- **Private key path**: `.vagrant/machines/\<machine name\>/virtualbox/private_key`
- **Username/password**: `vagrant/vagrant`
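As a concrete sketch of using that key with a plain SSH client and the default `vagrant` user; the VM name and IP below are placeholders, so substitute the values from the table above:

```bash
# hypothetical example: pick the VM's private key and its IP from the table above
ssh -i .vagrant/machines/master-1/virtualbox/private_key vagrant@192.168.56.11
```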
## Verify Environment
- Ensure all VMs are up
- Ensure VMs are assigned the above IP addresses
- Ensure you can SSH into these VMs using the IP and private keys, or `vagrant ssh`
- Ensure the VMs can ping each other
- Ensure all VMs are up.
- Ensure VMs are assigned the above IP addresses.
- Ensure you can SSH into these VMs using the IP and private keys, or `vagrant ssh`.
- Ensure the VMs can ping each other.
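A quick way to run these checks from the host, sketched under the assumption of the default VM names:

```bash
# every VM should report "running"
vagrant status

# log in to one node and ping another (relies on the /etc/hosts entries created during provisioning)
vagrant ssh master-1 -c "ping -c 1 worker-1"
```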
## Troubleshooting Tips
@@ -86,7 +86,7 @@ Vagrant generates a private key for each of these VMs. It is placed under the .v
If any of the VMs failed to provision, or is not configured correct, delete the VM using the command:
```bash
vagrant destroy <vm>
vagrant destroy \<vm\>
```
Then re-provision. Only the missing VMs will be re-provisioned
@@ -108,7 +108,7 @@ In such cases delete the VM, then delete the VM folder and then re-provision, e.
```bash
vagrant destroy worker-2
rmdir "<path-to-vm-folder>\kubernetes-ha-worker-2
rmdir "\<path-to-vm-folder\>\kubernetes-ha-worker-2
vagrant up
```
@@ -118,7 +118,7 @@ This will most likely happen at "Waiting for machine to reboot"
1. Hit `CTRL+C`
1. Kill any running `ruby` process, or Vagrant will complain.
1. Destroy the VM that got stuck: `vagrant destroy <vm>`
1. Destroy the VM that got stuck: `vagrant destroy \<vm\>`
1. Re-provision. It will pick up where it left off: `vagrant up`
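The same recovery sequence expressed as commands, assuming `worker-2` is the VM that got stuck (the guide leaves the name generic):

```bash
# after CTRL+C: stop leftover Vagrant ruby processes, then rebuild only the stuck VM
pkill -f ruby
vagrant destroy -f worker-2
vagrant up
```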
# Pausing the Environment
@@ -127,13 +127,13 @@ You do not need to complete the entire lab in one session. You may shut down and
To shut down. This will gracefully shut down all the VMs in the reverse order to which they were started:
```
```bash
vagrant halt
```
To power on again:
```
```bash
vagrant up
```

View File

@@ -1,6 +1,6 @@
# Installing the Client Tools
First identify a system from where you will perform administrative tasks, such as creating certificates, kubeconfig files and distributing them to the different VMs.
First identify a system from where you will perform administrative tasks, such as creating certificates, `kubeconfig` files and distributing them to the different VMs.
If you are on a Linux laptop, then your laptop could be this system. In my case I chose the `master-1` node to perform administrative tasks. Whichever system you chose make sure that system is able to access all the provisioned VMs through SSH to copy files over.
@@ -8,7 +8,7 @@ If you are on a Linux laptop, then your laptop could be this system. In my case
Here we create an SSH key pair for the `vagrant` user who we are logged in as. We will copy the public key of this pair to the other master and both workers to permit us to use password-less SSH (and SCP) go get from `master-1` to these other nodes in the context of the `vagrant` user which exists on all nodes.
Generate Key Pair on `master-1` node
Generate SSH key pair on `master-1` node:
[//]: # (host:master-1)
@@ -18,7 +18,7 @@ ssh-keygen
Leave all settings to default by pressing `ENTER` at any prompt.
Add this key to the local authorized_keys (`master-1`) as in some commands we scp to ourself.
Add this key to the local `authorized_keys` (`master-1`) as in some commands we `scp` to ourself.
```bash
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
@@ -46,11 +46,11 @@ and check to make sure that only the key(s) you wanted were added.
## Install kubectl
The [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl). command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
The [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
Reference: [https://kubernetes.io/docs/tasks/tools/install-kubectl/](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
We will be using kubectl early on to generate kubeconfig files for the controlplane components.
We will be using `kubectl` early on to generate `kubeconfig` files for the controlplane components.
### Linux
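The download commands themselves fall outside this hunk; for context, the upstream-documented approach looks roughly like this (URL pattern from kubernetes.io, not taken from this diff):

```bash
# fetch the latest stable kubectl binary for Linux amd64 and install it
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
```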
@@ -68,7 +68,7 @@ Verify `kubectl` is installed:
kubectl version -o yaml
```
> output will be similar to this, although versions may be newer
output will be similar to this, although versions may be newer:
```
kubectl version -o yaml

View File

@@ -24,7 +24,7 @@ MASTER_2=$(dig +short master-2)
LOADBALANCER=$(dig +short loadbalancer)
```
Compute cluster internal API server service address, which is always .1 in the service CIDR range. This is also required as a SAN in the API server certificate. Run the following:
Compute cluster internal API server service address, which is always `.1` in the service CIDR range. This is also required as a SAN in the API server certificate. Run the following:
```bash
SERVICE_CIDR=10.96.0.0/24
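# (illustrative sketch, not part of the original diff)
# one way to derive the ".1" service address from the CIDR above; the variable
# name API_SERVICE is an assumption, not taken from the lab text shown here
API_SERVICE=$(echo "${SERVICE_CIDR%/*}" | awk -F. '{printf "%s.%s.%s.1", $1, $2, $3}')
echo "${API_SERVICE}"   # -> 10.96.0.1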
@@ -53,7 +53,6 @@ The output should look like this. If you changed any of the defaults mentioned i
Create a CA certificate, then generate a Certificate Signing Request and use it to create a private key:
```bash
{
# Create private key for CA
@@ -66,6 +65,7 @@ Create a CA certificate, then generate a Certificate Signing Request and use it
openssl x509 -req -in ca.csr -signkey ca.key -CAcreateserial -out ca.crt -days 1000
}
```
Results:
```
@@ -75,9 +75,10 @@ ca.key
Reference : https://kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl
The ca.crt is the Kubernetes Certificate Authority certificate and ca.key is the Kubernetes Certificate Authority private key.
You will use the ca.crt file in many places, so it will be copied to many places.
The ca.key is used by the CA for signing certificates. And it should be securely stored. In this case our master node(s) is our CA server as well, so we will store it on master node(s). There is no need to copy this file elsewhere.
The `ca.crt` is the Kubernetes Certificate Authority certificate and `ca.key` is the Kubernetes Certificate Authority private key.
You will use the `ca.crt` file in many places, so it will be copied to many places.
The `ca.key` is used by the CA for signing certificates. And it should be securely stored. In this case our master node(s) is our CA server as well, so we will store it on master node(s). There is no need to copy this file elsewhere.
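If you want to sanity-check the CA that was just generated, a standard `openssl` inspection (separate from the lab's own `cert_verify.sh`) would look like:

```bash
# print the CA certificate's subject and validity window
openssl x509 -in ca.crt -noout -subject -dates
```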
## Client and Server Certificates
@@ -100,7 +101,7 @@ Generate the `admin` client certificate and private key:
}
```
Note that the admin user is part of the **system:masters** group. This is how we are able to perform any administrative operations on Kubernetes cluster using kubectl utility.
Note that the admin user is part of the **system:masters** group. This is how we are able to perform any administrative operations on Kubernetes cluster using `kubectl` utility.
Results:
@@ -109,7 +110,7 @@ admin.key
admin.crt
```
The admin.crt and admin.key file gives you administrative access. We will configure these to be used with the kubectl tool to perform administrative functions on kubernetes.
The `admin.crt` and `admin.key` file gives you administrative access. We will configure these to be used with the `kubectl` tool to perform administrative functions on Kubernetes.
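As a preview of how these files get used later (the real kubeconfig generation happens in a subsequent lab), a hedged sketch with standard `kubectl config` flags:

```bash
# illustrative only: register the admin certificate and key as a kubectl credential
kubectl config set-credentials admin \
  --client-certificate=admin.crt \
  --client-key=admin.key \
  --embed-certs=true
```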
### The Kubelet Client Certificates
@@ -242,7 +243,7 @@ kube-apiserver.key
# The Kubelet Client Certificate
This certificate is for the api server to authenticate with the kubelets when it requests information from them
This certificate is for the API server to authenticate with the kubelets when it requests information from them
```bash
cat > openssl-kubelet.cnf <<EOF
@@ -354,11 +355,11 @@ Run the following, and select option 1 to check all required certificates were g
[//]: # (command:./cert_verify.sh 1)
```
```bash
./cert_verify.sh
```
> Expected output
Expected output:
```
PKI generated correctly!

View File

@@ -4,7 +4,7 @@ In this lab you will generate [Kubernetes configuration files](https://kubernete
Note: It is good practice to use file paths to certificates in kubeconfigs that will be used by the services. When certificates are updated, it is not necessary to regenerate the config files, as you would have to if the certificate data was embedded. Note also that the cert files don't exist in these paths yet - we will place them in later labs.
User configs, like admin.kubeconfig will have the certificate info embedded within them.
User configs, like `admin.kubeconfig` will have the certificate info embedded within them.
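The difference comes down to the `--embed-certs` flag when building the kubeconfig; a sketch in which the cluster name, server address and certificate paths are illustrative rather than taken from this diff:

```bash
# user config: the CA data is embedded directly into admin.kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.crt --embed-certs=true \
  --server=https://127.0.0.1:6443 --kubeconfig=admin.kubeconfig

# service config: only the file path to the CA certificate is recorded,
# so rotating ca.crt does not require regenerating the kubeconfig
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=/var/lib/kubernetes/pki/ca.crt \
  --server=https://127.0.0.1:6443 --kubeconfig=kube-controller-manager.kubeconfig
```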
## Client Authentication Configs

View File

@@ -8,7 +8,7 @@ Note that in a production-ready cluster it is recommended to have an odd number
The commands in this lab up as far as the load balancer configuration must be run on each controller instance: `master-1`, and `master-2`. Login to each controller instance using SSH Terminal.
You can perform this step with [tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux)
You can perform this step with [tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux).
## Provision the Kubernetes Control Plane

View File

@@ -12,7 +12,7 @@ Here we will install the container runtime `containerd` from the Ubuntu distribu
[//]: # (host:worker-1-worker-2)
You can perform this step with [tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux)
You can perform this step with [tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux).
Set up the Kubernetes `apt` repository
@@ -41,6 +41,7 @@ Set up `containerd` configuration to enable systemd Cgroups
```bash
{
sudo mkdir -p /etc/containerd
containerd config default | sed 's/SystemdCgroup = false/SystemdCgroup = true/' | sudo tee /etc/containerd/config.toml
}
```

View File

@@ -137,6 +137,7 @@ Install the worker binaries:
```
### Configure the Kubelet
On worker-1:
Copy keys and config to correct directories and secure
@@ -223,6 +224,7 @@ EOF
```
### Configure the Kubernetes Proxy
On worker-1:
```bash
@@ -275,6 +277,7 @@ At `worker-1` node, run the following, selecting option 4
### Start the Worker Services
On worker-1:
```bash
{

View File

@@ -1,21 +1,11 @@
# Differences between original and this solution
Platform: I use VirtualBox to setup a local cluster, the original one uses GCP
Nodes: 2 Master and 2 Worker vs 2 Master and 3 Worker nodes
Configure 1 worker node normally
and the second one with TLS bootstrap
Node Names: I use worker-1 worker-2 instead of worker-0 worker-1
IP Addresses: I use statically assigned IPs on private network
Certificate File Names: I use <name>.crt for public certificate and <name>.key for private key file. Whereas original one uses <name>-.pem for certificate and <name>-key.pem for private key.
I generate separate certificates for etcd-server instead of using kube-apiserver
Network:
We use weavenet
Add E2E Tests
* Platform: I use VirtualBox to setup a local cluster, the original one uses GCP.
* Nodes: 2 master and 2 worker vs 2 master and 3 worker nodes.
* Configure 1 worker node normally and the second one with TLS bootstrap.
* Node names: I use worker-1 worker-2 instead of worker-0 worker-1.
* IP Addresses: I use statically assigned IPs on private network.
* Certificate file names: I use \<name\>.crt for public certificate and \<name\>.key for private key file. Whereas original one uses \<name\>-.pem for certificate and \<name\>-key.pem for private key.
* I generate separate certificates for etcd-server instead of using kube-apiserver.
* Network: we use weavenet.
* Add E2E Tests.

View File

@@ -7,7 +7,7 @@ A few prerequisites are handled by the VM provisioning steps.
## Kernel Settings
1. Disable cgroups v2. I found that Kubernetes currently doesn't play nice with cgroups v2, therefore we need to set a kernel boot parameter in grub to switch back to v1.
1. Install the `br_netfilter` kernel module that permits kube-proxy to manipulate IP tables rules
1. Install the `br_netfilter` kernel module that permits kube-proxy to manipulate IP tables rules.
1. Add the two tunables `net.bridge.bridge-nf-call-iptables=1` and `net.ipv4.ip_forward=1` also required for successful pod networking.
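As a sketch of what those kernel settings amount to (the repo's provisioning scripts apply them automatically; the config file name below is an assumption):

```bash
# load the bridge netfilter module and make the two tunables persistent
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```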
## DNS settings