Corrections made after replaying the whole procedure in a fresh environment

pull/582/head
Nemo 2020-06-22 12:42:19 +02:00
parent b509f173d5
commit ddcb5d4d75
9 changed files with 78 additions and 49 deletions

View File

@ -39,7 +39,7 @@ All the Kubernetes nodes (workers and controllers) only need one network interfa
![proxmox vm hardware](images/proxmox-vm-hardware.PNG)
The reverse proxy / client tools / gateway VM need to have 2 network interfaces, one linked to the private Kubernetes network (`vmbr8`) and the other linked to the public network (`vmbr0`).
The reverse proxy / client tools / gateway VM needs 2 network interfaces, one linked to the private Kubernetes network (`vmbr8`) and the other linked to the public network (`vmbr0`).
![proxmox vm hardware](images/proxmox-vm-hardware-gw.PNG)
@ -97,7 +97,7 @@ iface ens19 inet static
* Define the VM hostname:
```bash
hostnamectl set-hostname gateway-01
sudo hostnamectl set-hostname gateway-01
```
* Update the package list and upgrade the system:
@ -106,10 +106,10 @@ hostnamectl set-hostname gateway-01
sudo apt-get update && sudo apt-get upgrade -y
```
* Install SSH, vim, tmux, NTP and iptables-persistent:
* Install SSH, vim, tmux, curl, NTP and iptables-persistent:
```bash
sudo apt-get install ssh vim tmux ntp iptables-persistent -y
sudo apt-get install ssh vim tmux curl ntp iptables-persistent -y
```
* Enable and start the SSH and NTP services:
@ -124,8 +124,8 @@ sudo systemctl start ssh
* Enable IP routing:
```bash
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
echo '1' > /proc/sys/net/ipv4/ip_forward
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
echo '1' | sudo tee /proc/sys/net/ipv4/ip_forward
```
> If you want, you can define the IPv6 stack configuration.
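You can quickly confirm that forwarding is active by reading the kernel parameter back (a simple check with the standard `sysctl` tool):
```bash
# Should print "net.ipv4.ip_forward = 1" once forwarding is enabled
sysctl net.ipv4.ip_forward
```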
@ -160,10 +160,10 @@ COMMIT
* If you want to restore/activate the iptables rules:
```bash
iptables-restore < /etc/iptables/rules.v4
sudo iptables-restore < /etc/iptables/rules.v4
```
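If you later change the live rules and want to save them back to that file, one possible approach is the `netfilter-persistent` helper installed alongside `iptables-persistent` above:
```bash
# Dump the currently loaded rules back to /etc/iptables/rules.v4 (and rules.v6)
sudo netfilter-persistent save
```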
* Configure the /etc/hosts file (you need to replace `PUBLIC_GW_IP`):
* Configure the `/etc/hosts` file (you need to replace `PUBLIC_GW_IP`):
```bash
127.0.0.1 localhost
@ -226,7 +226,7 @@ network:
* Define the VM hostname (example for controller-0):
```bash
hostnamectl set-hostname controller-0
sudo hostnamectl set-hostname controller-0
```
* Update the package list and upgrade the system:
@ -250,7 +250,7 @@ sudo systemctl enable ssh
sudo systemctl start ssh
```
* Configure /etc/hosts file. Example for controller-0 (need to replace `PUBLIC_GW_IP` and adapt this sample config for each VM):
* Configure the `/etc/hosts` file. Example for controller-0 (you need to replace `PUBLIC_GW_IP` and adapt this sample config for each VM):
```bash
127.0.0.1 localhost

View File

@ -30,7 +30,7 @@ Verify `cfssl` and `cfssljson` version 1.3.4 or higher is installed:
cfssl version
```
> output
> Output:
```bash
Version: 1.3.4
@ -42,6 +42,8 @@ Runtime: go1.13
cfssljson --version
```
> Output:
```bash
Version: 1.3.4
Revision: dev
@ -72,7 +74,7 @@ Verify `kubectl` version 1.15.3 or higher is installed:
kubectl version --client
```
> output
> Output:
```bash
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

View File

@ -104,6 +104,8 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
On the `gateway-01` VM, generate a certificate and private key for each Kubernetes worker node (you need to replace YOUR_EXTERNAL_IP with your external IP address):
```bash
EXTERNAL_IP=YOUR_EXTERNAL_IP
for id_instance in 0 1 2; do
cat > worker-${id_instance}-csr.json <<EOF
{
@ -124,8 +126,6 @@ cat > worker-${id_instance}-csr.json <<EOF
}
EOF
EXTERNAL_IP=YOUR_EXTERNAL_IP
INTERNAL_IP=192.168.8.2${id_instance}
cfssl gencert \
@ -361,7 +361,7 @@ Copy the appropriate certificates and private keys to each worker instance:
```bash
for instance in worker-0 worker-1 worker-2; do
scp ca.pem ${instance}-key.pem ${instance}.pem $root@{instance}:~/
scp ca.pem ${instance}-key.pem ${instance}.pem root@${instance}:~/
done
```
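As an optional sanity check (not part of the original lab), you can list the copied files on each worker; this assumes the `/etc/hosts` entries and root SSH access configured earlier:
```bash
for instance in worker-0 worker-1 worker-2; do
  ssh root@${instance} ls -l ca.pem ${instance}-key.pem ${instance}.pem
done
```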

View File

@ -7,7 +7,7 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `ssh` command. Example for `controller-0`:
```bash
ssh controller-0
ssh root@controller-0
```
### Running commands in parallel with tmux

View File

@ -7,7 +7,7 @@ In this lab you will bootstrap the Kubernetes control plane across three VM inst
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `ssh` command. Example:
```bash
ssh controller-0
ssh root@controller-0
```
### Running commands in parallel with tmux
@ -209,20 +209,18 @@ etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
```
Test the nginx HTTP health check proxy:
Test the HTTPS health check:
```bash
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
curl -kH "Host: kubernetes.default.svc.cluster.local" -i https://127.0.0.1:6443/healthz
```
```bash
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
Date: Sat, 14 Sep 2019 18:34:11 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
X-Content-Type-Options: nosniff
HTTP/2 200
content-type: text/plain; charset=utf-8
x-content-type-options: nosniff
content-length: 2
date: Mon, 22 Jun 2020 09:44:42 GMT
ok
```
@ -238,7 +236,7 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
```bash
ssh controller-0
ssh root@controller-0
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
@ -291,7 +289,7 @@ EOF
## The Kubernetes Frontend Load Balancer
In this section you will provision an Nginx load balancer to front the Kubernetes API Servers. The load balancer will listen on the public IP address (on the `gateway-01` VM).
In this section you will provision an Nginx load balancer to front the Kubernetes API Servers. The load balancer will listen on both the private and public IP addresses (on the `gateway-01` VM).
### Provision an Nginx Load Balancer
@ -302,7 +300,7 @@ sudo apt-get update
sudo apt-get install -y nginx
```
Create the Nginx load balancer network configuration:
As the **root** user, create the Nginx load balancer network configuration:
```bash
cat <<EOF >> /etc/nginx/nginx.conf
@ -315,7 +313,7 @@ stream {
server {
listen 6443;
proxy_pass controller_backend;
health_check;
# health_check; # The health_check directive is only available with an Nginx commercial subscription
}
}
EOF
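# (Optional sketch, not in the original lab) Validate the configuration and
# restart Nginx, then query the API servers through the load balancer.
# Assumes the controllers are already running; -k skips certificate
# verification, as in the earlier health check.
sudo nginx -t
sudo systemctl restart nginx
curl -k https://127.0.0.1:6443/version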

View File

@ -7,7 +7,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `ssh` command. Example:
```bash
ssh worker-0
ssh root@worker-0
```
### Running commands in parallel with tmux
@ -41,7 +41,7 @@ If output is empthy then swap is not enabled. If swap is enabled run the followi
sudo swapoff -a
```
> To ensure swap remains off after reboot consult your Linux distro documentation.
> To ensure swap remains off after reboot, consult your Linux distro documentation. You may need to comment out the swap line in the `/etc/fstab` file.
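For example, on Ubuntu/Debian a quick way to do this is to comment out any swap entry in place (a sketch; check which line actually mounts swap in your own `/etc/fstab` first):
```bash
# Prefix fstab entries of type "swap" with a comment marker
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```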
### Download and Install Worker Binaries

View File

@ -24,14 +24,43 @@ List the routes in the `kubernetes-the-hard-way` VPC network:
ip route
```
> Output:
> Output (example for worker-0):
```bash
default via 192.168.8.1 dev ens18 proto static
10.200.0.0/24 dev cnio0 proto kernel scope link src 10.200.0.1 linkdown
10.200.0.0/24 via 192.168.8.20 dev ens18
10.200.1.0/24 via 192.168.8.21 dev ens18
10.200.2.0/24 via 192.168.8.22 dev ens18
192.168.8.0/24 dev ens18 proto kernel scope link src 192.168.8.21
```
To make these routes persistent across reboots, you need to edit your network configuration (the method depends on your Linux distribution).
Example for **Ubuntu 18.04** and higher:
```bash
vi /etc/netplan/00-installer-config.yaml
```
> Content (example for worker-0, **do not add a route for the Pod CIDR of the node itself**):
```bash
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
        - 192.168.8.20/24
      gateway4: 192.168.8.1
      nameservers:
        addresses:
          - 9.9.9.9
      routes:
        - to: 10.200.1.0/24
          via: 192.168.8.21
        - to: 10.200.2.0/24
          via: 192.168.8.22
  version: 2
```
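Once the file is saved, apply the new configuration and list the routes again; the output should match the example above (`netplan apply` briefly reconfigures the interface):
```bash
sudo netplan apply
ip route
```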
Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)

View File

@ -27,7 +27,7 @@ List the pods created by the `kube-dns` deployment:
kubectl get pods -l k8s-app=kube-dns -n kube-system
```
> Output:
> Output (you may need to wait a few seconds to see the pods "READY"):
```bash
NAME READY STATUS RESTARTS AGE
@ -49,7 +49,7 @@ List the pod created by the `busybox` deployment:
kubectl get pods -l run=busybox
```
> Output:
> Output (you may need to wait a few seconds to see the pod "READY"):
```bash
NAME READY STATUS RESTARTS AGE

View File

@ -63,7 +63,7 @@ List the pod created by the `nginx` deployment:
kubectl get pods -l app=nginx
```
> Output:
> Output (you may need to wait a few seconds to see the pod "READY"):
```bash
NAME READY STATUS RESTARTS AGE
@ -103,13 +103,13 @@ curl --head http://127.0.0.1:8080
```bash
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:10:11 GMT
Server: nginx/1.19.0
Date: Mon, 22 Jun 2020 10:34:51 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
Last-Modified: Tue, 26 May 2020 15:00:20 GMT
Connection: keep-alive
ETag: "5d5279b8-264"
ETag: "5ecd2f04-264"
Accept-Ranges: bytes
```
@ -135,7 +135,7 @@ kubectl logs $POD_NAME
> Output:
```bash
127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-"
127.0.0.1 - - [22/Jun/2020:10:34:51 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"
```
### Exec
@ -151,7 +151,7 @@ kubectl exec -ti $POD_NAME -- nginx -v
> Output:
```bash
nginx version: nginx/1.17.3
nginx version: nginx/1.19.0
```
## Services
@ -176,7 +176,7 @@ NODE_PORT=$(kubectl get svc nginx \
Define the Kubernetes network IP address of a worker instance (replace MY_WORKER_IP with the private IP defined on a worker):
```bash
EXTERNAL_IP=MY_WORKER_IP
NODE_IP=MY_WORKER_IP
```
> Example for worker-0: 192.168.8.20
@ -184,20 +184,20 @@ EXTERNAL_IP=MY_WORKER_IP
Make an HTTP request using the worker node IP address and the `nginx` node port:
```bash
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
curl -I http://${NODE_IP}:${NODE_PORT}
```
> Output:
```bash
HTTP/1.1 200 OK
Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:12:35 GMT
Server: nginx/1.19.0
Date: Mon, 22 Jun 2020 10:38:31 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Aug 2019 08:50:00 GMT
Last-Modified: Tue, 26 May 2020 15:00:20 GMT
Connection: keep-alive
ETag: "5d5279b8-264"
ETag: "5ecd2f04-264"
Accept-Ranges: bytes
```