diff --git a/deployments/coredns.yaml b/deployments/coredns.yaml
index 352dcd4..23cdafd 100644
--- a/deployments/coredns.yaml
+++ b/deployments/coredns.yaml
@@ -62,7 +62,7 @@ data:
         loadbalance
     }
 ---
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: coredns
diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md
index d4d679d..320c700 100644
--- a/docs/01-prerequisites.md
+++ b/docs/01-prerequisites.md
@@ -2,7 +2,7 @@
 
 ## VM Hardware Requirements
 
-8 GB of RAM (Preferebly 16 GB)
+8 GB of RAM (Preferably 16 GB)
 50 GB Disk space
 
 ## Virtual Box
@@ -26,6 +26,3 @@ Download and Install [Vagrant](https://www.vagrantup.com/) on your platform.
 - Centos
 - Linux
 - macOS
-- Arch Linux
-
-Next: [Compute Resources](02-compute-resources.md)
\ No newline at end of file
diff --git a/docs/02-compute-resources.md b/docs/02-compute-resources.md
index d07d7db..eb76199 100644
--- a/docs/02-compute-resources.md
+++ b/docs/02-compute-resources.md
@@ -18,7 +18,9 @@ Run Vagrant up
 This does the below:
 
 - Deploys 5 VMs - 2 Master, 2 Worker and 1 Loadbalancer with the name 'kubernetes-ha-* '
- > This is the default settings. This can be changed at the top of the Vagrant file
+ > These are the default settings. They can be changed at the top of the Vagrant file.
+ > If you choose to change these settings, please also update vagrant/ubuntu/vagrant/setup-hosts.sh
+ > to add the additional hosts to the /etc/hosts default before running "vagrant up".
 
 - Set's IP addresses in the range 192.168.5
@@ -73,7 +75,7 @@ Vagrant generates a private key for each of these VMs. It is placed under the .v
 
 ## Troubleshooting Tips
 
-If any of the VMs failed to provision, or is not configured correct, delete the vm using the command:
+1. If any of the VMs failed to provision, or is not configured correctly, delete the VM using the command:
 
 `vagrant destroy `
@@ -97,6 +99,4 @@ In such cases delete the VM, then delete the VM folder and then re-provision
 
 `rmdir "\kubernetes-ha-worker-2"`
 
 `vagrant up`
-
-Next: [Client Tools](03-client-tools.md)
\ No newline at end of file
diff --git a/docs/05-kubernetes-configuration-files.md b/docs/05-kubernetes-configuration-files.md
index 7dec4ff..4d945d8 100644
--- a/docs/05-kubernetes-configuration-files.md
+++ b/docs/05-kubernetes-configuration-files.md
@@ -45,10 +45,9 @@ Results:
 
 ```
 kube-proxy.kubeconfig
-
+```
 
 Reference docs for kube-proxy [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)
-```
 
 ### The kube-controller-manager Kubernetes Configuration File
 
diff --git a/docs/06-data-encryption-keys.md b/docs/06-data-encryption-keys.md
index 4d6392c..f1f155a 100644
--- a/docs/06-data-encryption-keys.md
+++ b/docs/06-data-encryption-keys.md
@@ -39,6 +39,15 @@ for instance in master-1 master-2; do
   scp encryption-config.yaml ${instance}:~/
 done
 ```
+
+Move the `encryption-config.yaml` encryption config file to the appropriate directory.
+
+```
+for instance in master-1 master-2; do
+  ssh ${instance} sudo mv encryption-config.yaml /var/lib/kubernetes/
+done
+```
+
 Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data
 
 Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
diff --git a/docs/07-bootstrapping-etcd.md b/docs/07-bootstrapping-etcd.md
index 2a01a4c..0fa2027 100644
--- a/docs/07-bootstrapping-etcd.md
+++ b/docs/07-bootstrapping-etcd.md
@@ -8,7 +8,7 @@ The commands in this lab must be run on each controller instance: `master-1`, an
 
 ### Running commands in parallel with tmux
 
-[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
+[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time.
 
 ## Bootstrapping an etcd Cluster Member
diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md
index 761aba8..8b3d358 100644
--- a/docs/08-bootstrapping-kubernetes-controllers.md
+++ b/docs/08-bootstrapping-kubernetes-controllers.md
@@ -78,7 +78,7 @@ Documentation=https://github.com/kubernetes/kubernetes
 ExecStart=/usr/local/bin/kube-apiserver \\
   --advertise-address=${INTERNAL_IP} \\
   --allow-privileged=true \\
-  --apiserver-count=3 \\
+  --apiserver-count=2 \\
   --audit-log-maxage=30 \\
   --audit-log-maxbackup=3 \\
   --audit-log-maxsize=100 \\
@@ -99,7 +99,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
   --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
   --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
   --kubelet-https=true \\
-  --runtime-config=api/all \\
+  --runtime-config=api/all=true \\
   --service-account-key-file=/var/lib/kubernetes/service-account.crt \\
   --service-cluster-ip-range=10.96.0.0/24 \\
   --service-node-port-range=30000-32767 \\
diff --git a/docs/10-tls-bootstrapping-kubernetes-workers.md b/docs/10-tls-bootstrapping-kubernetes-workers.md
index 2f2e1d1..48d1ec6 100644
--- a/docs/10-tls-bootstrapping-kubernetes-workers.md
+++ b/docs/10-tls-bootstrapping-kubernetes-workers.md
@@ -14,7 +14,11 @@ This is not a practical approach when you have 1000s of nodes in the cluster, an
 - The Nodes can retrieve the signed certificate from the Kubernetes CA
 - The Nodes can generate a kube-config file using this certificate by themselves
 - The Nodes can start and join the cluster by themselves
-- The Nodes can renew certificates when they expire by themselves
+- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
+
+In Kubernetes 1.11 a patch was merged to require administrator or Controller approval of node serving CSRs for security reasons.
+
+Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation
 
 So let's get started!
@@ -39,16 +43,13 @@ So let's get started!
 
 Copy the ca certificate to the worker node:
 
-```
-scp ca.crt worker-2:~/
-```
 
 ## Step 1 Configure the Binaries on the Worker node
 
 ### Download and Install Worker Binaries
 
 ```
-wget -q --show-progress --https-only --timestamping \
+worker-2$ wget -q --show-progress --https-only --timestamping \
   https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
   https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
   https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet
@@ -59,7 +60,7 @@ Reference: https://kubernetes.io/docs/setup/release/#node-binaries
 Create the installation directories:
 
 ```
-sudo mkdir -p \
+worker-2$ sudo mkdir -p \
   /etc/cni/net.d \
   /opt/cni/bin \
   /var/lib/kubelet \
@@ -78,7 +79,7 @@ Install the worker binaries:
 ```
 
 ### Move the ca certificate
 
-`sudo mv ca.crt /var/lib/kubernetes/`
+`worker-2$ sudo mv ca.crt /var/lib/kubernetes/`
 
 # Step 1 Create the Boostrap Token to be used by Nodes(Kubelets) to invoke Certificate API
@@ -86,10 +87,10 @@ For the workers(kubelet) to access the Certificates API, they need to authentica
 Bootstrap Tokens take the form of a 6 character token id followed by 16 character token secret separated by a dot. Eg: abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}\.[a-z0-9]{16}
 
-Bootstrap Tokens are created as a secret in the kube-system namespace.
+
 ```
-cat > bootstrap-token-07401b.yaml < bootstrap-token-07401b.yaml < csrs-for-bootstrapping.yaml < csrs-for-bootstrapping.yaml < auto-approve-csrs-for-group.yaml < auto-approve-csrs-for-group.yaml < auto-approve-renewals-for-nodes.yaml < auto-approve-renewals-for-nodes.yaml <
-> Note: You don't really need to update data directory and volumeMounts.mountPath path above. You could simply just update the hostPath.path in the volumes section to point to the new directory. But if you are not working with a kubeadm deployed cluster, then you might have to update the data directory. That's why I left it as is.
+
+When this file is updated, the ETCD pod is automatically re-created as this is a static pod placed under the `/etc/kubernetes/manifests` directory.
+
+
+> Note: as the ETCD pod has changed it will automatically restart, and so will kube-controller-manager and kube-scheduler. Wait 1-2 mins for these pods to restart. You can run `watch "docker ps | grep etcd"` to see when the ETCD pod is restarted.
+
+> Note2: If the etcd pod is not getting `Ready 1/1`, then restart it by `kubectl delete pod -n kube-system etcd-controlplane` and wait 1 minute.
+
+> Note3: This is the simplest way to make sure that ETCD uses the restored data after the ETCD pod is recreated. You **don't** have to change anything else.
+
+ **If** you do change **--data-dir** to **/var/lib/etcd-from-backup** in the YAML file, make sure that the **volumeMounts** for **etcd-data** are updated as well, with the mountPath pointing to /var/lib/etcd-from-backup (**THIS COMPLETE STEP IS OPTIONAL AND NEED NOT BE DONE FOR COMPLETING THE RESTORE**)
diff --git a/tools/kubernetes-certs-checker.xlsx b/tools/kubernetes-certs-checker.xlsx
index f385c66..009ecd6 100644
Binary files a/tools/kubernetes-certs-checker.xlsx and b/tools/kubernetes-certs-checker.xlsx differ
diff --git a/vagrant/ubuntu/cert_verify.sh b/vagrant/ubuntu/cert_verify.sh
index 705a53f..dc104e0 100755
--- a/vagrant/ubuntu/cert_verify.sh
+++ b/vagrant/ubuntu/cert_verify.sh
@@ -310,8 +310,8 @@ check_cert_kpkubeconfig()
     elif [ -f $KPKUBECONFIG ]
         then
             printf "${NC}kube-proxy kubeconfig file found, verifying the authenticity\n"
-            KPKUBECONFIG_SUBJECT=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-            KPKUBECONFIG_ISSUER=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+            KPKUBECONFIG_SUBJECT=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+            KPKUBECONFIG_ISSUER=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
             KPKUBECONFIG_CERT_MD5=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
             KPKUBECONFIG_KEY_MD5=$(cat $KPKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
             KPKUBECONFIG_SERVER=$(cat $KPKUBECONFIG | grep "server:"| awk '{print $2}')
@@ -337,8 +337,8 @@ check_cert_kcmkubeconfig()
     elif [ -f $KCMKUBECONFIG ]
         then
             printf "${NC}kube-controller-manager kubeconfig file found, verifying the authenticity\n"
-            KCMKUBECONFIG_SUBJECT=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-            KCMKUBECONFIG_ISSUER=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+            KCMKUBECONFIG_SUBJECT=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+            KCMKUBECONFIG_ISSUER=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
             KCMKUBECONFIG_CERT_MD5=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
             KCMKUBECONFIG_KEY_MD5=$(cat $KCMKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
             KCMKUBECONFIG_SERVER=$(cat $KCMKUBECONFIG | grep "server:"| awk '{print $2}')
@@ -365,8 +365,8 @@ check_cert_kskubeconfig()
     elif [ -f $KSKUBECONFIG ]
         then
             printf "${NC}kube-scheduler kubeconfig file found, verifying the authenticity\n"
-            KSKUBECONFIG_SUBJECT=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-            KSKUBECONFIG_ISSUER=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+            KSKUBECONFIG_SUBJECT=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+            KSKUBECONFIG_ISSUER=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
             KSKUBECONFIG_CERT_MD5=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
             KSKUBECONFIG_KEY_MD5=$(cat $KSKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
             KSKUBECONFIG_SERVER=$(cat $KSKUBECONFIG | grep "server:"| awk '{print $2}')
@@ -392,8 +392,8 @@ check_cert_adminkubeconfig()
     elif [ -f $ADMINKUBECONFIG ]
         then
             printf "${NC}admin kubeconfig file found, verifying the authenticity\n"
-            ADMINKUBECONFIG_SUBJECT=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-            ADMINKUBECONFIG_ISSUER=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+            ADMINKUBECONFIG_SUBJECT=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+            ADMINKUBECONFIG_ISSUER=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
             ADMINKUBECONFIG_CERT_MD5=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
             ADMINKUBECONFIG_KEY_MD5=$(cat $ADMINKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
             ADMINKUBECONFIG_SERVER=$(cat $ADMINKUBECONFIG | grep "server:"| awk '{print $2}')
@@ -611,8 +611,8 @@ check_cert_worker_1_kubeconfig()
     elif [ -f $WORKER_1_KUBECONFIG ]
         then
             printf "${NC}worker-1 kubeconfig file found, verifying the authenticity\n"
-            WORKER_1_KUBECONFIG_SUBJECT=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-            WORKER_1_KUBECONFIG_ISSUER=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+            WORKER_1_KUBECONFIG_SUBJECT=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+            WORKER_1_KUBECONFIG_ISSUER=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
             WORKER_1_KUBECONFIG_CERT_MD5=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
             WORKER_1_KUBECONFIG_KEY_MD5=$(cat $WORKER_1_KUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
             WORKER_1_KUBECONFIG_SERVER=$(cat $WORKER_1_KUBECONFIG | grep "server:"| awk '{print $2}')
@@ -769,4 +769,4 @@
     printf "${FAILED}Exiting.... Please select the valid option either 1 or 2\n"
     exit 1
     ;;
-esac
\ No newline at end of file
+esac