commit acee279989 ("Merge branch 'master' into patch-1")
@@ -1,11 +1,5 @@
> This tutorial is a modified version of the original developed by [Kelsey Hightower](https://github.com/kelseyhightower/kubernetes-the-hard-way).

This repository holds the supporting material for the [Certified Kubernetes Administrators Course](https://kodekloud.com/p/certified-kubernetes-administrator-with-practice-tests). There are two major sections.

- [Kubernetes The Hard Way on VirtualBox](#kubernetes-the-hard-way-on-virtualbox)
- [Answers to Practice Tests hosted on KodeKloud](/practice-questions-answers)

# Kubernetes The Hard Way On VirtualBox

This tutorial walks you through setting up Kubernetes the hard way on a local machine using VirtualBox.
@@ -55,3 +49,4 @@ Kubernetes The Hard Way guides you through bootstrapping a highly available Kube
* [Smoke Test](docs/15-smoke-test.md)
* [E2E Test](docs/16-e2e-tests.md)
* [Extra - Dynamic Kubelet Configuration](docs/17-extra-dynamic-kubelet-configuration.md)
* [Extra - Certificate Verification](docs/verify-certificates.md)
@@ -62,7 +62,7 @@ data:
    loadbalance
}
---
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
@@ -2,7 +2,7 @@

## VM Hardware Requirements

-8 GB of RAM (Preferebly 16 GB)
+8 GB of RAM (Preferably 16 GB)
50 GB Disk space

## Virtual Box
@@ -28,14 +28,3 @@ Download and Install [Vagrant](https://www.vagrantup.com/) on your platform.
- macOS
- Arch Linux

## Running Commands in Parallel with tmux

[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances. In those cases, consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.

> The use of tmux is optional and not required to complete this tutorial.



> Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
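The same pane layout can also be created non-interactively. A sketch (the session name `k8s` and the `vagrant ssh` targets are illustrative, not part of the original lab):

```shell
# Start a detached session and split its window into two side-by-side panes
tmux new-session -d -s k8s
tmux split-window -h -t k8s
# Type a login command into each pane (one VM per pane)
tmux send-keys -t k8s:0.0 'vagrant ssh master-1' C-m
tmux send-keys -t k8s:0.1 'vagrant ssh master-2' C-m
# Mirror every keystroke to all panes in the window
tmux set-option -w -t k8s synchronize-panes on
```

Attach with `tmux attach -t k8s`; anything typed afterwards runs in both panes.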

Next: [Installing the Client Tools](02-client-tools.md)
@@ -18,7 +18,9 @@ Run Vagrant up
This does the below:

- Deploys 5 VMs - 2 Master, 2 Worker and 1 Loadbalancer with the name 'kubernetes-ha-*'
-> This is the default settings. This can be changed at the top of the Vagrant file
+> These are the default settings. They can be changed at the top of the Vagrantfile.
+> If you choose to change these settings, please also update vagrant/ubuntu/vagrant/setup-hosts.sh
+> to add the additional hosts to the /etc/hosts default before running "vagrant up".

- Sets IP addresses in the range 192.168.5
@@ -73,7 +75,7 @@ Vagrant generates a private key for each of these VMs. It is placed under the .v

## Troubleshooting Tips

-If any of the VMs failed to provision, or is not configured correct, delete the vm using the command:
+1. If any of the VMs failed to provision, or is not configured correctly, delete the VM using the command:

`vagrant destroy <vm>`
@@ -90,10 +92,18 @@ VirtualBox error:
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component SessionMachine, interface IMachine, callee IUnknown
VBoxManage.exe: error: Context: "SaveSettings()" at line 3105 of file VBoxManageModifyVM.cpp

-In such cases delete the VM, then delete teh VM folder and then re-provision
+In such cases delete the VM, then delete the VM folder and then re-provision:

`vagrant destroy <vm>`

`rmdir "<path-to-vm-folder>\kubernetes-ha-worker-2"`

`vagrant up`

2. When you run "sysctl net.bridge.bridge-nf-call-iptables=1", it may return a "sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory" error. Loading the `br_netfilter` module and re-applying the setting from `/etc/sysctl.conf` (which should contain the line `net.bridge.bridge-nf-call-iptables=1`) resolves the issue:

`modprobe br_netfilter`

`sysctl -p /etc/sysctl.conf`
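The `modprobe` fix above does not survive a reboot. To persist it, the module load and the sysctl can be declared in boot-time config files; a sketch using conventional drop-in paths (the file names are illustrative, adjust for your distribution):

```
# /etc/modules-load.d/br_netfilter.conf (load the module at boot)
br_netfilter

# /etc/sysctl.d/99-bridge-nf.conf (apply the sysctl at boot)
net.bridge.bridge-nf-call-iptables = 1
```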
@@ -20,6 +20,9 @@ Create a CA certificate, then generate a Certificate Signing Request and use it
# Create private key for CA
openssl genrsa -out ca.key 2048

+# Comment out the line starting with RANDFILE in /etc/ssl/openssl.cnf to avoid permission issues
+sudo sed -i '0,/RANDFILE/{s/RANDFILE/\#&/}' /etc/ssl/openssl.cnf

# Create CSR using the private key
openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
@@ -4,7 +4,7 @@ In this lab you will generate [Kubernetes configuration files](https://kubernete

## Client Authentication Configs

-In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.
+In this section you will generate kubeconfig files for the `controller manager`, `kube-proxy`, and `scheduler` clients and the `admin` user.

### Kubernetes Public IP Address
@@ -45,10 +45,9 @@ Results:

```
kube-proxy.kubeconfig
```

Reference docs for kube-proxy [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)

### The kube-controller-manager Kubernetes Configuration File
@@ -167,7 +166,7 @@ for instance in worker-1 worker-2; do
done
```

-Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
+Copy the appropriate `admin.kubeconfig`, `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:

```
for instance in master-1 master-2; do
@@ -39,6 +39,15 @@ for instance in master-1 master-2; do
  scp encryption-config.yaml ${instance}:~/
done
```

+Move the `encryption-config.yaml` file to the appropriate directory.
+
+```
+for instance in master-1 master-2; do
+  ssh ${instance} sudo mv encryption-config.yaml /var/lib/kubernetes/
+done
+```
+
+Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data

Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
@@ -8,7 +8,7 @@ The commands in this lab must be run on each controller instance: `master-1`, an

### Running commands in parallel with tmux

-[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
+[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time.

## Bootstrapping an etcd Cluster Member
@@ -78,7 +78,7 @@ Documentation=https://github.com/kubernetes/kubernetes
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
-  --apiserver-count=3 \\
+  --apiserver-count=2 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
@@ -99,7 +99,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
  --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
  --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
  --kubelet-https=true \\
-  --runtime-config=api/all \\
+  --runtime-config=api/all=true \\
  --service-account-key-file=/var/lib/kubernetes/service-account.crt \\
  --service-cluster-ip-range=10.96.0.0/24 \\
  --service-node-port-range=30000-32767 \\
@@ -116,10 +116,10 @@ EOF

### Configure the Kubernetes Controller Manager

-Move the `kube-controller-manager` kubeconfig into place:
+Copy the `kube-controller-manager` kubeconfig into place:

```
-sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
+sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
```

Create the `kube-controller-manager.service` systemd unit file:
@@ -154,10 +154,10 @@ EOF

### Configure the Kubernetes Scheduler

-Move the `kube-scheduler` kubeconfig into place:
+Copy the `kube-scheduler` kubeconfig into place:

```
-sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
+sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
```

Create the `kube-scheduler.service` systemd unit file:
@@ -218,6 +218,8 @@ In this section you will provision an external load balancer to front the Kubern

### Provision a Network Load Balancer

Login to the `loadbalancer` instance using SSH Terminal.

```
# Install HAProxy
loadbalancer# sudo apt-get update && sudo apt-get install -y haproxy
```
@@ -8,7 +8,8 @@ We will now install the kubernetes components

## Prerequisites

-The commands in this lab must be run on first worker instance: `worker-1`. Login to first worker instance using SSH Terminal.
+The certificates and configuration are created on the `master-1` node and then copied over to the workers using `scp`.
+Once this is done, the commands are to be run on the first worker instance: `worker-1`. Login to the first worker instance using SSH Terminal.

### Provisioning Kubelet Client Certificates
@@ -16,7 +17,7 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc

Generate a certificate and private key for one worker node:

-Worker1:
+On master-1:

```
master-1$ cat > openssl-worker-1.cnf <<EOF
@@ -54,8 +55,9 @@ Get the kube-api server load-balancer IP.
LOADBALANCER_ADDRESS=192.168.5.30
```

-Generate a kubeconfig file for the first worker node:
+Generate a kubeconfig file for the first worker node.
+On master-1:
```
{
kubectl config set-cluster kubernetes-the-hard-way \
@@ -86,7 +88,7 @@ worker-1.kubeconfig
```

### Copy certificates, private keys and kubeconfig files to the worker node:

On master-1:
```
master-1$ scp ca.crt worker-1.crt worker-1.key worker-1.kubeconfig worker-1:~/
```
|
|||
|
||||
Going forward all activities are to be done on the `worker-1` node.
|
||||
|
||||
On worker-1:
|
||||
```
|
||||
worker-1$ wget -q --show-progress --https-only --timestamping \
|
||||
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
|
||||
|
@@ -126,7 +129,7 @@ Install the worker binaries:
```

### Configure the Kubelet

On worker-1:
```
{
sudo mv ${HOSTNAME}.key ${HOSTNAME}.crt /var/lib/kubelet/
@@ -189,7 +192,7 @@ EOF
```

### Configure the Kubernetes Proxy

On worker-1:
```
worker-1$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
@@ -227,7 +230,7 @@ EOF
```

### Start the Worker Services

On worker-1:
```
{
sudo systemctl daemon-reload
@@ -239,7 +242,7 @@ EOF
> Remember to run the above commands on worker node: `worker-1`

## Verification

On master-1:

List the registered Kubernetes nodes from the master node:
@@ -257,4 +260,6 @@ worker-1 NotReady <none> 93s v1.13.0
> Note: It is OK for the worker node to be in a NotReady state.
That is because we haven't configured networking yet.

Optional: At this point you may run the certificate verification script to make sure all certificates are configured correctly. Follow the instructions [here](verify-certificates.md).

Next: [TLS Bootstrapping Kubernetes Workers](10-tls-bootstrapping-kubernetes-workers.md)
@@ -14,7 +14,11 @@ This is not a practical approach when you have 1000s of nodes in the cluster, an
- The Nodes can retrieve the signed certificate from the Kubernetes CA
- The Nodes can generate a kube-config file using this certificate by themselves
- The Nodes can start and join the cluster by themselves
-- The Nodes can renew certificates when they expire by themselves
+- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
+
+In Kubernetes 1.11 a patch was merged to require administrator or Controller approval of node serving CSRs for security reasons.
+
+Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation

So let's get started!
@@ -39,16 +43,13 @@ So let's get started!

Copy the ca certificate to the worker node:

```
scp ca.crt worker-2:~/
```

## Step 1 Configure the Binaries on the Worker node

### Download and Install Worker Binaries

```
-wget -q --show-progress --https-only --timestamping \
+worker-2$ wget -q --show-progress --https-only --timestamping \
  https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet
@@ -59,7 +60,7 @@ Reference: https://kubernetes.io/docs/setup/release/#node-binaries
Create the installation directories:

```
-sudo mkdir -p \
+worker-2$ sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
@@ -78,7 +79,7 @@ Install the worker binaries:
```

### Move the ca certificate

-`sudo mv ca.crt /var/lib/kubernetes/`
+`worker-2$ sudo mv ca.crt /var/lib/kubernetes/`

# Step 2 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke the Certificate API
@@ -86,10 +87,10 @@ For the workers(kubelet) to access the Certificates API, they need to authentica

Bootstrap Tokens take the form of a 6 character token id followed by a 16 character token secret, separated by a dot, e.g. abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}\.[a-z0-9]{16}
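A token of this form can be generated on the fly; a small sketch (this lab uses a fixed token later, so this is only to illustrate the format):

```shell
# 3 random bytes -> 6 lowercase hex chars; 8 bytes -> 16 chars
TOKEN_ID=$(openssl rand -hex 3)
TOKEN_SECRET=$(openssl rand -hex 8)
BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
echo "${BOOTSTRAP_TOKEN}"
```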

Bootstrap Tokens are created as a secret in the kube-system namespace.

```
-cat > bootstrap-token-07401b.yaml <<EOF
+master-1$ cat > bootstrap-token-07401b.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
@@ -119,7 +120,7 @@ stringData:
EOF

-kubectl create -f bootstrap-token-07401b.yaml
+master-1$ kubectl create -f bootstrap-token-07401b.yaml
```
@@ -136,11 +137,11 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tok
Next we associate the group we created before to the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet.

```
-kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
+master-1$ kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers

--------------- OR ---------------

-cat > csrs-for-bootstrapping.yaml <<EOF
+master-1$ cat > csrs-for-bootstrapping.yaml <<EOF
# enable bootstrapping nodes to create CSR
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@@ -157,18 +158,18 @@ roleRef:
EOF

-kubectl create -f csrs-for-bootstrapping.yaml
+master-1$ kubectl create -f csrs-for-bootstrapping.yaml
```

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr

## Step 3 Authorize workers(kubelets) to approve CSR

```
-kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
+master-1$ kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers

--------------- OR ---------------

-cat > auto-approve-csrs-for-group.yaml <<EOF
+master-1$ cat > auto-approve-csrs-for-group.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@@ -185,7 +186,7 @@ roleRef:
EOF

-kubectl create -f auto-approve-csrs-for-group.yaml
+master-1$ kubectl create -f auto-approve-csrs-for-group.yaml
```

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
@@ -195,11 +196,11 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
We now create the Cluster Role Binding required for the nodes to automatically renew the certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more, since by the renewal period the node is expected to be bootstrapped and part of the cluster already. All nodes are part of the **system:nodes** group.

```
-kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
+master-1$ kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes

--------------- OR ---------------

-cat > auto-approve-renewals-for-nodes.yaml <<EOF
+master-1$ cat > auto-approve-renewals-for-nodes.yaml <<EOF
# Approve renewal CSRs for the group "system:nodes"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@@ -216,7 +217,7 @@ roleRef:
EOF

-kubectl create -f auto-approve-renewals-for-nodes.yaml
+master-1$ kubectl create -f auto-approve-renewals-for-nodes.yaml
```

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
@@ -231,7 +232,7 @@ Here, we don't have the certificates yet. So we cannot create a kubeconfig file.
This is to be done on the `worker-2` node.

```
-sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
+worker-2$ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap
@@ -240,7 +241,7 @@ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-conte
Or

```
-cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
+worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
apiVersion: v1
clusters:
- cluster:
@@ -269,7 +270,7 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
Create the `kubelet-config.yaml` configuration file:

```
-cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
+worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
@@ -296,7 +297,7 @@ EOF
Create the `kubelet.service` systemd unit file:

```
-cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
+worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
@@ -311,7 +312,6 @@ ExecStart=/usr/local/bin/kubelet \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --cert-dir=/var/lib/kubelet/pki/ \\
  --rotate-certificates=true \\
  --rotate-server-certificates=true \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2
@@ -327,18 +327,19 @@ Things to note here:
- **bootstrap-kubeconfig**: Location of the bootstrap-kubeconfig file.
- **cert-dir**: The directory where the generated certificates are stored.
- **rotate-certificates**: Rotates client certificates when they expire.
- **rotate-server-certificates**: Requests for server certificates on bootstrap and rotates them when they expire.

## Step 7 Configure the Kubernetes Proxy

In one of the previous steps we created the kube-proxy.kubeconfig file. Check [here](https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md) if you missed it.

```
-sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
+worker-2$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```

Create the `kube-proxy-config.yaml` configuration file:

```
-cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
+worker-2$ cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
@@ -351,7 +352,7 @@ EOF
Create the `kube-proxy.service` systemd unit file:

```
-cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
+worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
@@ -369,6 +370,8 @@ EOF

## Step 8 Start the Worker Services

+On worker-2:
+
```
{
sudo systemctl daemon-reload
@@ -381,7 +384,7 @@ EOF

## Step 9 Approve Server CSR

-`kubectl get csr`
+`master-1$ kubectl get csr`

```
NAME        AGE   REQUESTOR                 CONDITION
@@ -391,7 +394,9 @@ csr-95bv6 20s system:node:worker-

Approve the CSR:

-`kubectl certificate approve csr-95bv6`
+`master-1$ kubectl certificate approve csr-95bv6`

Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
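To know when that day comes, the expiry of any certificate can be read with `openssl x509 -noout -enddate`. A self-contained sketch using a throwaway certificate (on the node you would instead point `-in` at the kubelet serving certificate under `/var/lib/kubelet/pki/`):

```shell
# Create a throwaway self-signed certificate valid for 365 days
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 365 -subj "/CN=demo" 2>/dev/null
# Print the date after which the certificate is no longer valid
openssl x509 -noout -enddate -in /tmp/demo.crt
```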

Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubectl-approval
|
@ -29,9 +29,9 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
|
|||
|
||||
kubectl config use-context kubernetes-the-hard-way
|
||||
}
|
||||
```
|
||||
|
||||
Reference doc for kubectl config [here](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
|
||||
```
|
||||
|
||||
## Verification
|
||||
|
||||
|
|
|
@@ -32,9 +32,9 @@ EOF
```
Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole

-The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
+The Kubernetes API Server authenticates to the Kubelet as the `system:kube-apiserver` user using the client certificate as defined by the `--kubelet-client-certificate` flag.

-Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
+Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `system:kube-apiserver` user:

```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
@@ -50,9 +50,9 @@ roleRef:
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
-  name: kube-apiserver
+  name: system:kube-apiserver
EOF
```
-Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
+Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding

Next: [DNS Addon](14-dns-addon.md)
@@ -3,9 +3,9 @@
Install Go

```
-wget https://dl.google.com/go/go1.12.1.linux-amd64.tar.gz
+wget https://dl.google.com/go/go1.15.linux-amd64.tar.gz

-sudo tar -C /usr/local -xzf go1.12.1.linux-amd64.tar.gz
+sudo tar -C /usr/local -xzf go1.15.linux-amd64.tar.gz
export GOPATH="/home/vagrant/go"
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
```
@@ -13,23 +13,23 @@ export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
## Install kubetest

```
-go get -v -u k8s.io/test-infra/kubetest
+git clone https://github.com/kubernetes/test-infra.git
+cd test-infra/
+GO111MODULE=on go install ./kubetest
```

> Note: This may take a few minutes depending on your network speed

-## Extract the Version
+## Use the version specific to your cluster

```
-kubetest --extract=v1.13.0
+K8S_VERSION=$(kubectl version -o json | jq -r '.serverVersion.gitVersion')
+export KUBERNETES_CONFORMANCE_TEST=y
+export KUBECONFIG="$HOME/.kube/config"

cd kubernetes

export KUBE_MASTER_IP="192.168.5.11:6443"

export KUBE_MASTER=master-1

-kubetest --test --provider=skeleton --test_args="--ginkgo.focus=\[Conformance\]" | tee test.out
+kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\]" --extract ${K8S_VERSION} | tee test.out
```
|
|
@ -11,9 +11,18 @@ NODE_NAME="worker-1"; NODE_NAME="worker-1"; curl -sSL "https://localhost:6443/ap
|
|||
kubectl -n kube-system create configmap nodes-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
|
||||
```
|
||||
|
||||
Edit node to use the dynamically created configuration
|
||||
Edit `worker-1` node to use the dynamically created configuration
|
||||
```
|
||||
kubectl edit worker-2
|
||||
master-1# kubectl edit node worker-1
|
||||
```
|
||||
|
||||
Add the following YAML bit under `spec`:
|
||||
```
|
||||
configSource:
|
||||
configMap:
|
||||
name: CONFIG_MAP_NAME # replace CONFIG_MAP_NAME with the name of the ConfigMap
|
||||
namespace: kube-system
|
||||
kubeletConfigKey: kubelet
|
||||
```
|
||||
|
||||
Configure Kubelet Service
|
||||
|
@@ -45,3 +54,5 @@ RestartSec=5
WantedBy=multi-user.target
EOF
```

Reference: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
Binary file not shown. (After Width: | Height: | Size: 100 KiB)
Binary file not shown. (After Width: | Height: | Size: 75 KiB)
Binary file not shown. (After Width: | Height: | Size: 44 KiB)
@@ -0,0 +1,29 @@
# Verify Certificates in Master-1/2 & Worker-1

> Note: This script is only intended to work with a kubernetes cluster setup following instructions from this repository. It is not a generic script that works for all kubernetes clusters. Feel free to send in PRs with improvements.

This script was developed to assist the verification of certificates for each Kubernetes component as part of building the cluster. It may be executed as soon as you have completed the lab steps up to [Bootstrapping the Kubernetes Worker Nodes](./09-bootstrapping-kubernetes-workers.md). The script is named `cert_verify.sh` and is available in the `/home/vagrant` directory of the master-1, master-2 and worker-1 nodes. If it's not already available there, copy the script to the nodes from [here](../vagrant/ubuntu/cert_verify.sh).

The script must be executed with the following commands after logging into the respective virtual machine (master-1 / master-2 / worker-1) via SSH.

```bash
cd /home/vagrant
bash cert_verify.sh
```

The following is the successful output of the script on each of the nodes:

1. VM: Master-1

![Cert](verify-certificates-master-1.png)

2. VM: Master-2

![Cert](verify-certificates-master-2.png)

3. VM: Worker-1

![Cert](verify-certificates-worker-1.png)

Any misconfiguration in certificates will be reported in red.
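The core of what the script checks can also be reproduced by hand with openssl. A self-contained sketch (a throwaway CA and component certificate are generated here; on the nodes you would point the commands at the real files, e.g. `ca.crt` and `kube-apiserver.crt`):

```shell
# Create a throwaway CA and a certificate signed by it
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 1 -subj "/CN=KUBERNETES-CA" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout /tmp/component.key \
  -out /tmp/component.csr -subj "/CN=demo-component" 2>/dev/null
openssl x509 -req -in /tmp/component.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
  -CAcreateserial -out /tmp/component.crt -days 1 2>/dev/null
# Inspect subject and issuer, then verify the chain against the CA
openssl x509 -noout -subject -issuer -in /tmp/component.crt
openssl verify -CAfile /tmp/ca.crt /tmp/component.crt
```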
@@ -51,6 +51,6 @@ This repository contains answers for the practice tests hosted on the course [Ce
# Contributing Guide

1. The folder structure for all topics and associated practice tests are created already. Use the same pattern to create one if it doesn't exist.
-2. Create a file with your answers. If you have a different answer than the one that is already there, create a new answer file with yoru name in it.
+2. Create a file with your answers. If you have a different answer than the one that is already there, create a new answer file with your name in it.
3. Do not post the entire question. Only post the question number.
4. Send in a pull request.
@@ -5,7 +5,7 @@
Reference: https://github.com/etcd-io/etcd/releases

```
-ETCD_VER=v3.3.13
+ETCD_VER=v3.4.9

# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
@@ -30,9 +30,15 @@ mv /tmp/etcd-download-test/etcdctl /usr/bin
```
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
-    snapshot save /tmp/snapshot-pre-boot.db
+    snapshot save /opt/snapshot-pre-boot.db
```

Note: In this case, etcd is running on the same server where we are running the commands (the *controlplane* node). As a result, the **--endpoints** argument is optional and can be omitted.

The options **--cert**, **--cacert** and **--key** are mandatory to authenticate to the etcd server to take the backup.

If you want to take a backup of an etcd service running on a different machine, provide the endpoint of that server (the IP address and port of the etcd server) with the **--endpoints** argument.
|
||||
|
||||
# -----------------------------
|
||||
# Disaster Happens
|
||||
# -----------------------------
|
||||
|
@ -40,51 +46,34 @@ ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kuberne
|
|||
# 3. Restore ETCD Snapshot to a new folder
|
||||
|
||||
```
|
||||
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
|
||||
--name=master \
|
||||
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
|
||||
--data-dir /var/lib/etcd-from-backup \
|
||||
--initial-cluster=master=https://127.0.0.1:2380 \
|
||||
--initial-cluster-token etcd-cluster-1 \
|
||||
--initial-advertise-peer-urls=https://127.0.0.1:2380 \
|
||||
snapshot restore /tmp/snapshot-pre-boot.db
|
||||
ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-from-backup \
|
||||
snapshot restore /opt/snapshot-pre-boot.db
|
||||
```
|
||||
|
||||
Note: In this case, we are restoring the snapshot to a different directory but on the same server where we took the backup (**the controlplane node**).
|
||||
As a result, the only required option for the restore command is the **--data-dir**.
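If instead you were restoring as part of re-seeding a member of a multi-node cluster, the cluster-membership flags become relevant again. A sketch, using the single-node placeholder values from this lab (**master** as the member name and 127.0.0.1 as its peer address):

```
ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot-pre-boot.db \
  --name=master \
  --data-dir=/var/lib/etcd-from-backup \
  --initial-cluster=master=https://127.0.0.1:2380 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-advertise-peer-urls=https://127.0.0.1:2380
```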
|
||||
|
||||
# 4. Modify /etc/kubernetes/manifests/etcd.yaml
|
||||
|
||||
Update the ETCD pod to use the new data directory and cluster token by modifying the pod definition file at `/etc/kubernetes/manifests/etcd.yaml`. When this file is updated, the ETCD pod is automatically re-created as this is a static pod placed under the `/etc/kubernetes/manifests` directory.
|
||||
|
||||
Update --data-dir to use the new target location
|
||||
We have now restored the etcd snapshot to a new path on the controlplane (**/var/lib/etcd-from-backup**), so the only change to be made in the YAML file is to change the hostPath for the volume called **etcd-data** from the old directory (/var/lib/etcd) to the new directory (**/var/lib/etcd-from-backup**).
|
||||
|
||||
```
|
||||
--data-dir=/var/lib/etcd-from-backup
|
||||
```
|
||||
|
||||
Update initial-cluster-token to specify the new cluster
|
||||
|
||||
```
|
||||
--initial-cluster-token=etcd-cluster-1
|
||||
```
|
||||
|
||||
Update volumes and volume mounts to point to the new path
|
||||
|
||||
```
|
||||
volumeMounts:
|
||||
- mountPath: /var/lib/etcd-from-backup
|
||||
name: etcd-data
|
||||
- mountPath: /etc/kubernetes/pki/etcd
|
||||
name: etcd-certs
|
||||
hostNetwork: true
|
||||
priorityClassName: system-cluster-critical
|
||||
volumes:
|
||||
- hostPath:
|
||||
path: /var/lib/etcd-from-backup
|
||||
type: DirectoryOrCreate
|
||||
name: etcd-data
|
||||
- hostPath:
|
||||
path: /etc/kubernetes/pki/etcd
|
||||
type: DirectoryOrCreate
|
||||
name: etcd-certs
|
||||
```
|
||||
With this change, /var/lib/etcd on the **container** points to /var/lib/etcd-from-backup on the **controlplane** (which is what we want).
|
||||
|
||||
> Note: You don't really need to update the data directory and the volumeMounts.mountPath above. You could simply update the hostPath.path in the volumes section to point to the new directory. But if you are not working with a kubeadm deployed cluster, then you might have to update the data directory. That's why I left it as is.
|
||||
|
||||
When this file is updated, the ETCD pod is automatically re-created as this is a static pod placed under the `/etc/kubernetes/manifests` directory.
|
||||
|
||||
|
||||
> Note: as the ETCD pod has changed it will automatically restart, as will kube-controller-manager and kube-scheduler. Wait 1-2 minutes for these pods to restart. You can run `watch "docker ps | grep etcd"` to see when the ETCD pod is restarted.
|
||||
|
||||
> Note2: If the etcd pod is not getting to `Ready 1/1`, restart it with `kubectl delete pod -n kube-system etcd-controlplane` and wait 1 minute.
|
||||
|
||||
> Note3: This is the simplest way to make sure that ETCD uses the restored data after the ETCD pod is recreated. You **don't** have to change anything else.
|
||||
|
||||
**If** you do change **--data-dir** to **/var/lib/etcd-from-backup** in the YAML file, make sure that the **volumeMounts** entry for **etcd-data** is updated as well, with the mountPath pointing to /var/lib/etcd-from-backup (**THIS COMPLETE STEP IS OPTIONAL AND NEED NOT BE DONE FOR COMPLETING THE RESTORE**)
|
||||
|
|
Binary file not shown.
|
@ -71,6 +71,7 @@ Vagrant.configure("2") do |config|
|
|||
end
|
||||
|
||||
node.vm.provision "setup-dns", type: "shell", :path => "ubuntu/update-dns.sh"
|
||||
node.vm.provision "file", source: "./ubuntu/cert_verify.sh", destination: "$HOME/"
|
||||
|
||||
end
|
||||
end
|
||||
|
@ -111,8 +112,9 @@ Vagrant.configure("2") do |config|
|
|||
end
|
||||
|
||||
node.vm.provision "setup-dns", type: "shell", :path => "ubuntu/update-dns.sh"
|
||||
node.vm.provision "install-docker", type: "shell", :path => "ubuntu/install-docker.sh"
|
||||
node.vm.provision "install-docker", type: "shell", :path => "ubuntu/install-docker-2.sh"
|
||||
node.vm.provision "allow-bridge-nf-traffic", :type => "shell", :path => "ubuntu/allow-bridge-nf-traffic.sh"
|
||||
node.vm.provision "file", source: "./ubuntu/cert_verify.sh", destination: "$HOME/"
|
||||
|
||||
end
|
||||
end
|
||||
|
|
|
@ -0,0 +1,772 @@
|
|||
#!/bin/bash
|
||||
set -e
|
||||
#set -x
|
||||
|
||||
# Green & Red marking for Success and Failed messages
|
||||
SUCCESS='\033[0;32m'
|
||||
FAILED='\033[0;31m'
|
||||
NC='\033[0m'
|
||||
|
||||
# All Cert Location
|
||||
|
||||
# ca certificate location
|
||||
CACERT=ca.crt
|
||||
CAKEY=ca.key
|
||||
|
||||
# admin certificate location
|
||||
ADMINCERT=admin.crt
|
||||
ADMINKEY=admin.key
|
||||
|
||||
# Kube controller manager certificate location
|
||||
KCMCERT=kube-controller-manager.crt
|
||||
KCMKEY=kube-controller-manager.key
|
||||
|
||||
# Kube proxy certificate location
|
||||
KPCERT=kube-proxy.crt
|
||||
KPKEY=kube-proxy.key
|
||||
|
||||
# Kube scheduler certificate location
|
||||
KSCERT=kube-scheduler.crt
|
||||
KSKEY=kube-scheduler.key
|
||||
|
||||
# Kube api certificate location
|
||||
APICERT=kube-apiserver.crt
|
||||
APIKEY=kube-apiserver.key
|
||||
|
||||
# ETCD certificate location
|
||||
ETCDCERT=etcd-server.crt
|
||||
ETCDKEY=etcd-server.key
|
||||
|
||||
# Service account certificate location
|
||||
SACERT=service-account.crt
|
||||
SAKEY=service-account.key
|
||||
|
||||
# All kubeconfig locations
|
||||
|
||||
# kubeproxy.kubeconfig location
|
||||
KPKUBECONFIG=kube-proxy.kubeconfig
|
||||
|
||||
# kube-controller-manager.kubeconfig location
|
||||
KCMKUBECONFIG=kube-controller-manager.kubeconfig
|
||||
|
||||
# kube-scheduler.kubeconfig location
|
||||
KSKUBECONFIG=kube-scheduler.kubeconfig
|
||||
|
||||
# admin.kubeconfig location
|
||||
ADMINKUBECONFIG=admin.kubeconfig
|
||||
|
||||
# All systemd service locations
|
||||
|
||||
# etcd systemd service
|
||||
SYSTEMD_ETCD_FILE=/etc/systemd/system/etcd.service
|
||||
|
||||
# kube-api systemd service
|
||||
SYSTEMD_API_FILE=/etc/systemd/system/kube-apiserver.service
|
||||
|
||||
# kube-controller-manager systemd service
|
||||
SYSTEMD_KCM_FILE=/etc/systemd/system/kube-controller-manager.service
|
||||
|
||||
# kube-scheduler systemd service
|
||||
SYSTEMD_KS_FILE=/etc/systemd/system/kube-scheduler.service
|
||||
|
||||
### WORKER NODES ###
|
||||
|
||||
# Worker-1 cert details
|
||||
WORKER_1_CERT=/var/lib/kubelet/worker-1.crt
|
||||
WORKER_1_KEY=/var/lib/kubelet/worker-1.key
|
||||
|
||||
# Worker-1 kubeconfig location
|
||||
WORKER_1_KUBECONFIG=/var/lib/kubelet/kubeconfig
|
||||
|
||||
# Worker-1 kubelet config location
|
||||
WORKER_1_KUBELET=/var/lib/kubelet/kubelet-config.yaml
|
||||
|
||||
# Systemd worker-1 kubelet location
|
||||
SYSTEMD_WORKER_1_KUBELET=/etc/systemd/system/kubelet.service
|
||||
|
||||
# kube-proxy worker-1 location
|
||||
WORKER_1_KP_KUBECONFIG=/var/lib/kube-proxy/kubeconfig
|
||||
SYSTEMD_WORKER_1_KP=/etc/systemd/system/kube-proxy.service
|
||||
|
||||
|
||||
# Function - Master node #
|
||||
|
||||
check_cert_ca()
|
||||
{
|
||||
if [ -z $CACERT ] && [ -z $CAKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify cert and key location\n"
|
||||
exit 1
|
||||
elif [ -f $CACERT ] && [ -f $CAKEY ]
|
||||
then
|
||||
printf "${NC}CA cert and key found, verifying the authenticity\n"
|
||||
CACERT_SUBJECT=$(openssl x509 -in $CACERT -text | grep "Subject: CN"| tr -d " ")
|
||||
CACERT_ISSUER=$(openssl x509 -in $CACERT -text | grep "Issuer: CN"| tr -d " ")
|
||||
CACERT_MD5=$(openssl x509 -noout -modulus -in $CACERT | openssl md5| awk '{print $2}')
|
||||
CAKEY_MD5=$(openssl rsa -noout -modulus -in $CAKEY | openssl md5| awk '{print $2}')
|
||||
if [ $CACERT_SUBJECT == "Subject:CN=KUBERNETES-CA" ] && [ $CACERT_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $CACERT_MD5 == $CAKEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}CA cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the CA certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#certificate-authority\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}ca.crt / ca.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#certificate-authority\n"
|
||||
exit 1
|
||||
fi
|
||||
}
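The modulus comparison used above can be tried standalone: generate a throw-away self-signed pair with openssl (the /CN=KUBERNETES-CA subject is just an illustration matching this lab) and confirm that the certificate and its key hash to the same modulus. This is a minimal sketch assuming `openssl` is installed; it does not touch any real cluster certificates.

```
# Scratch directory with a throw-away self-signed cert and key
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=KUBERNETES-CA" \
  -keyout "$TMP/ca.key" -out "$TMP/ca.crt" 2>/dev/null

# Same technique as check_cert_ca: hash the modulus of cert and key
CERT_MD5=$(openssl x509 -noout -modulus -in "$TMP/ca.crt" | openssl md5 | awk '{print $2}')
KEY_MD5=$(openssl rsa -noout -modulus -in "$TMP/ca.key" | openssl md5 | awk '{print $2}')

# Equal hashes mean the cert was issued for this private key
[ "$CERT_MD5" = "$KEY_MD5" ] && echo "cert and key moduli match"
```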
|
||||
|
||||
|
||||
check_cert_admin()
|
||||
{
|
||||
if [ -z $ADMINCERT ] && [ -z $ADMINKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify cert and key location\n"
|
||||
exit 1
|
||||
elif [ -f $ADMINCERT ] && [ -f $ADMINKEY ]
|
||||
then
|
||||
printf "${NC}admin cert and key found, verifying the authenticity\n"
|
||||
ADMINCERT_SUBJECT=$(openssl x509 -in $ADMINCERT -text | grep "Subject: CN"| tr -d " ")
|
||||
ADMINCERT_ISSUER=$(openssl x509 -in $ADMINCERT -text | grep "Issuer: CN"| tr -d " ")
|
||||
ADMINCERT_MD5=$(openssl x509 -noout -modulus -in $ADMINCERT | openssl md5| awk '{print $2}')
|
||||
ADMINKEY_MD5=$(openssl rsa -noout -modulus -in $ADMINKEY | openssl md5| awk '{print $2}')
|
||||
if [ $ADMINCERT_SUBJECT == "Subject:CN=admin,O=system:masters" ] && [ $ADMINCERT_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $ADMINCERT_MD5 == $ADMINKEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}admin cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the admin certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-admin-client-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}admin.crt / admin.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-admin-client-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_kcm()
|
||||
{
|
||||
if [ -z $KCMCERT ] && [ -z $KCMKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify cert and key location\n"
|
||||
exit 1
|
||||
elif [ -f $KCMCERT ] && [ -f $KCMKEY ]
|
||||
then
|
||||
printf "${NC}kube-controller-manager cert and key found, verifying the authenticity\n"
|
||||
KCMCERT_SUBJECT=$(openssl x509 -in $KCMCERT -text | grep "Subject: CN"| tr -d " ")
|
||||
KCMCERT_ISSUER=$(openssl x509 -in $KCMCERT -text | grep "Issuer: CN"| tr -d " ")
|
||||
KCMCERT_MD5=$(openssl x509 -noout -modulus -in $KCMCERT | openssl md5| awk '{print $2}')
|
||||
KCMKEY_MD5=$(openssl rsa -noout -modulus -in $KCMKEY | openssl md5| awk '{print $2}')
|
||||
if [ $KCMCERT_SUBJECT == "Subject:CN=system:kube-controller-manager" ] && [ $KCMCERT_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $KCMCERT_MD5 == $KCMKEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}kube-controller-manager cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-controller-manager certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-controller-manager-client-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-controller-manager.crt / kube-controller-manager.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-controller-manager-client-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_kp()
|
||||
{
|
||||
if [ -z $KPCERT ] && [ -z $KPKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify cert and key location\n"
|
||||
exit 1
|
||||
elif [ -f $KPCERT ] && [ -f $KPKEY ]
|
||||
then
|
||||
printf "${NC}kube-proxy cert and key found, verifying the authenticity\n"
|
||||
KPCERT_SUBJECT=$(openssl x509 -in $KPCERT -text | grep "Subject: CN"| tr -d " ")
|
||||
KPCERT_ISSUER=$(openssl x509 -in $KPCERT -text | grep "Issuer: CN"| tr -d " ")
|
||||
KPCERT_MD5=$(openssl x509 -noout -modulus -in $KPCERT | openssl md5| awk '{print $2}')
|
||||
KPKEY_MD5=$(openssl rsa -noout -modulus -in $KPKEY | openssl md5| awk '{print $2}')
|
||||
if [ $KPCERT_SUBJECT == "Subject:CN=system:kube-proxy" ] && [ $KPCERT_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $KPCERT_MD5 == $KPKEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}kube-proxy cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-proxy certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-kube-proxy-client-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-proxy.crt / kube-proxy.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-kube-proxy-client-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_ks()
|
||||
{
|
||||
if [ -z $KSCERT ] && [ -z $KSKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify cert and key location\n"
|
||||
exit 1
|
||||
elif [ -f $KSCERT ] && [ -f $KSKEY ]
|
||||
then
|
||||
printf "${NC}kube-scheduler cert and key found, verifying the authenticity\n"
|
||||
KSCERT_SUBJECT=$(openssl x509 -in $KSCERT -text | grep "Subject: CN"| tr -d " ")
|
||||
KSCERT_ISSUER=$(openssl x509 -in $KSCERT -text | grep "Issuer: CN"| tr -d " ")
|
||||
KSCERT_MD5=$(openssl x509 -noout -modulus -in $KSCERT | openssl md5| awk '{print $2}')
|
||||
KSKEY_MD5=$(openssl rsa -noout -modulus -in $KSKEY | openssl md5| awk '{print $2}')
|
||||
if [ $KSCERT_SUBJECT == "Subject:CN=system:kube-scheduler" ] && [ $KSCERT_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $KSCERT_MD5 == $KSKEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}kube-scheduler cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-scheduler certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-scheduler-client-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-scheduler.crt / kube-scheduler.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-scheduler-client-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_api()
|
||||
{
|
||||
if [ -z $APICERT ] && [ -z $APIKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify kube-api cert and key location, Exiting....\n"
|
||||
exit 1
|
||||
elif [ -f $APICERT ] && [ -f $APIKEY ]
|
||||
then
|
||||
printf "${NC}kube-apiserver cert and key found, verifying the authenticity\n"
|
||||
APICERT_SUBJECT=$(openssl x509 -in $APICERT -text | grep "Subject: CN"| tr -d " ")
|
||||
APICERT_ISSUER=$(openssl x509 -in $APICERT -text | grep "Issuer: CN"| tr -d " ")
|
||||
APICERT_MD5=$(openssl x509 -noout -modulus -in $APICERT | openssl md5| awk '{print $2}')
|
||||
APIKEY_MD5=$(openssl rsa -noout -modulus -in $APIKEY | openssl md5| awk '{print $2}')
|
||||
if [ $APICERT_SUBJECT == "Subject:CN=kube-apiserver" ] && [ $APICERT_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $APICERT_MD5 == $APIKEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}kube-apiserver cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-apiserver certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-kubernetes-api-server-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-apiserver.crt / kube-apiserver.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-kubernetes-api-server-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_etcd()
|
||||
{
|
||||
if [ -z $ETCDCERT ] && [ -z $ETCDKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify ETCD cert and key location, Exiting....\n"
|
||||
exit 1
|
||||
elif [ -f $ETCDCERT ] && [ -f $ETCDKEY ]
|
||||
then
|
||||
printf "${NC}ETCD cert and key found, verifying the authenticity\n"
|
||||
ETCDCERT_SUBJECT=$(openssl x509 -in $ETCDCERT -text | grep "Subject: CN"| tr -d " ")
|
||||
ETCDCERT_ISSUER=$(openssl x509 -in $ETCDCERT -text | grep "Issuer: CN"| tr -d " ")
|
||||
ETCDCERT_MD5=$(openssl x509 -noout -modulus -in $ETCDCERT | openssl md5| awk '{print $2}')
|
||||
ETCDKEY_MD5=$(openssl rsa -noout -modulus -in $ETCDKEY | openssl md5| awk '{print $2}')
|
||||
if [ $ETCDCERT_SUBJECT == "Subject:CN=etcd-server" ] && [ $ETCDCERT_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $ETCDCERT_MD5 == $ETCDKEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}etcd-server.crt / etcd-server.key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the ETCD certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-etcd-server-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}etcd-server.crt / etcd-server.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-etcd-server-certificate\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_sa()
|
||||
{
|
||||
if [ -z $SACERT ] && [ -z $SAKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify Service Account cert and key location, Exiting....\n"
|
||||
exit 1
|
||||
elif [ -f $SACERT ] && [ -f $SAKEY ]
|
||||
then
|
||||
printf "${NC}service account cert and key found, verifying the authenticity\n"
|
||||
SACERT_SUBJECT=$(openssl x509 -in $SACERT -text | grep "Subject: CN"| tr -d " ")
|
||||
SACERT_ISSUER=$(openssl x509 -in $SACERT -text | grep "Issuer: CN"| tr -d " ")
|
||||
SACERT_MD5=$(openssl x509 -noout -modulus -in $SACERT | openssl md5| awk '{print $2}')
|
||||
SAKEY_MD5=$(openssl rsa -noout -modulus -in $SAKEY | openssl md5| awk '{print $2}')
|
||||
if [ $SACERT_SUBJECT == "Subject:CN=service-accounts" ] && [ $SACERT_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $SACERT_MD5 == $SAKEY_MD5 ]
|
||||
then
|
||||
printf "${SUCCESS}Service Account cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the Service Account certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-service-account-key-pair\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}service-account.crt / service-account.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/04-certificate-authority.md#the-service-account-key-pair\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
|
||||
check_cert_kpkubeconfig()
|
||||
{
|
||||
if [ -z $KPKUBECONFIG ]
|
||||
then
|
||||
printf "${FAILED}please specify kube-proxy kubeconfig location\n"
|
||||
exit 1
|
||||
elif [ -f $KPKUBECONFIG ]
|
||||
then
|
||||
printf "${NC}kube-proxy kubeconfig file found, verifying the authenticity\n"
|
||||
KPKUBECONFIG_SUBJECT=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
|
||||
KPKUBECONFIG_ISSUER=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
|
||||
KPKUBECONFIG_CERT_MD5=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
|
||||
KPKUBECONFIG_KEY_MD5=$(cat $KPKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
|
||||
KPKUBECONFIG_SERVER=$(cat $KPKUBECONFIG | grep "server:"| awk '{print $2}')
|
||||
if [ $KPKUBECONFIG_SUBJECT == "Subject:CN=system:kube-proxy" ] && [ $KPKUBECONFIG_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $KPKUBECONFIG_CERT_MD5 == $KPKUBECONFIG_KEY_MD5 ] && [ $KPKUBECONFIG_SERVER == "https://192.168.5.30:6443" ]
|
||||
then
|
||||
printf "${SUCCESS}kube-proxy kubeconfig cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-proxy kubeconfig certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-kube-proxy-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-proxy kubeconfig file is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-kube-proxy-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
}
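The kubeconfig checks above all rely on the same grep/awk/base64 extraction pipeline. It can be exercised on its own against a tiny fixture file; the embedded data below is just the base64 of a placeholder string, not a real certificate, and the server address is the one this lab expects for kube-proxy.

```
# Fixture kubeconfig with placeholder client-certificate-data
TMP=$(mktemp -d)
cat > "$TMP/demo.kubeconfig" <<EOF
clusters:
- cluster:
    server: https://192.168.5.30:6443
users:
- user:
    client-certificate-data: $(printf 'placeholder-cert' | base64)
EOF

# Same extraction pipeline as the checks above
DECODED=$(grep "client-certificate-data:" "$TMP/demo.kubeconfig" | awk '{print $2}' | base64 --decode)
SERVER=$(grep "server:" "$TMP/demo.kubeconfig" | awk '{print $2}')
echo "$DECODED on $SERVER"
```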
|
||||
|
||||
check_cert_kcmkubeconfig()
|
||||
{
|
||||
if [ -z $KCMKUBECONFIG ]
|
||||
then
|
||||
printf "${FAILED}please specify kube-controller-manager kubeconfig location\n"
|
||||
exit 1
|
||||
elif [ -f $KCMKUBECONFIG ]
|
||||
then
|
||||
printf "${NC}kube-controller-manager kubeconfig file found, verifying the authenticity\n"
|
||||
KCMKUBECONFIG_SUBJECT=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
|
||||
KCMKUBECONFIG_ISSUER=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
|
||||
KCMKUBECONFIG_CERT_MD5=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
|
||||
KCMKUBECONFIG_KEY_MD5=$(cat $KCMKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
|
||||
KCMKUBECONFIG_SERVER=$(cat $KCMKUBECONFIG | grep "server:"| awk '{print $2}')
|
||||
if [ $KCMKUBECONFIG_SUBJECT == "Subject:CN=system:kube-controller-manager" ] && [ $KCMKUBECONFIG_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $KCMKUBECONFIG_CERT_MD5 == $KCMKUBECONFIG_KEY_MD5 ] && [ $KCMKUBECONFIG_SERVER == "https://127.0.0.1:6443" ]
|
||||
then
|
||||
printf "${SUCCESS}kube-controller-manager kubeconfig cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-controller-manager kubeconfig certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-kube-controller-manager-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-controller-manager kubeconfig file is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-kube-controller-manager-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
|
||||
check_cert_kskubeconfig()
|
||||
{
|
||||
if [ -z $KSKUBECONFIG ]
|
||||
then
|
||||
printf "${FAILED}please specify kube-scheduler kubeconfig location\n"
|
||||
exit 1
|
||||
elif [ -f $KSKUBECONFIG ]
|
||||
then
|
||||
printf "${NC}kube-scheduler kubeconfig file found, verifying the authenticity\n"
|
||||
KSKUBECONFIG_SUBJECT=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
|
||||
KSKUBECONFIG_ISSUER=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
|
||||
KSKUBECONFIG_CERT_MD5=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
|
||||
KSKUBECONFIG_KEY_MD5=$(cat $KSKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
|
||||
KSKUBECONFIG_SERVER=$(cat $KSKUBECONFIG | grep "server:"| awk '{print $2}')
|
||||
if [ $KSKUBECONFIG_SUBJECT == "Subject:CN=system:kube-scheduler" ] && [ $KSKUBECONFIG_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $KSKUBECONFIG_CERT_MD5 == $KSKUBECONFIG_KEY_MD5 ] && [ $KSKUBECONFIG_SERVER == "https://127.0.0.1:6443" ]
|
||||
then
|
||||
printf "${SUCCESS}kube-scheduler kubeconfig cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the kube-scheduler kubeconfig certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-kube-scheduler-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}kube-scheduler kubeconfig file is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-kube-scheduler-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_cert_adminkubeconfig()
|
||||
{
|
||||
if [ -z $ADMINKUBECONFIG ]
|
||||
then
|
||||
printf "${FAILED}please specify admin kubeconfig location\n"
|
||||
exit 1
|
||||
elif [ -f $ADMINKUBECONFIG ]
|
||||
then
|
||||
printf "${NC}admin kubeconfig file found, verifying the authenticity\n"
|
||||
ADMINKUBECONFIG_SUBJECT=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
|
||||
ADMINKUBECONFIG_ISSUER=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
|
||||
ADMINKUBECONFIG_CERT_MD5=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
|
||||
ADMINKUBECONFIG_KEY_MD5=$(cat $ADMINKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
|
||||
ADMINKUBECONFIG_SERVER=$(cat $ADMINKUBECONFIG | grep "server:"| awk '{print $2}')
|
||||
if [ $ADMINKUBECONFIG_SUBJECT == "Subject:CN=admin,O=system:masters" ] && [ $ADMINKUBECONFIG_ISSUER == "Issuer:CN=KUBERNETES-CA" ] && [ $ADMINKUBECONFIG_CERT_MD5 == $ADMINKUBECONFIG_KEY_MD5 ] && [ $ADMINKUBECONFIG_SERVER == "https://127.0.0.1:6443" ]
|
||||
then
|
||||
printf "${SUCCESS}admin kubeconfig cert and key are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the admin kubeconfig certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-admin-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
printf "${FAILED}admin kubeconfig file is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md#the-admin-kubernetes-configuration-file\n"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
check_systemd_etcd()
|
||||
{
|
||||
if [ -z $ETCDCERT ] && [ -z $ETCDKEY ]
|
||||
then
|
||||
printf "${FAILED}please specify ETCD cert and key location, Exiting....\n"
|
||||
exit 1
|
||||
elif [ -f $SYSTEMD_ETCD_FILE ]
|
||||
then
|
||||
printf "${NC}Systemd for ETCD service found, verifying the authenticity\n"
|
||||
|
||||
# Systemd cert and key file details
|
||||
ETCD_CA_CERT=ca.crt
|
||||
CERT_FILE=$(systemctl cat etcd.service | grep "\--cert-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
KEY_FILE=$(systemctl cat etcd.service | grep "\--key-file"| awk '{print $1}' | cut -d "=" -f2)
|
||||
PEER_CERT_FILE=$(systemctl cat etcd.service | grep "\--peer-cert-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
PEER_KEY_FILE=$(systemctl cat etcd.service | grep "\--peer-key-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
TRUSTED_CA_FILE=$(systemctl cat etcd.service | grep "\--trusted-ca-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
PEER_TRUSTED_CA_FILE=$(systemctl cat etcd.service | grep "\--peer-trusted-ca-file"| awk '{print $1}'| cut -d "=" -f2)
|
||||
|
||||
# Systemd advertise, client and peer URLs
|
||||
INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
|
||||
IAP_URL=$(systemctl cat etcd.service | grep "\--initial-advertise-peer-urls"| awk '{print $2}')
|
||||
LP_URL=$(systemctl cat etcd.service | grep "\--listen-peer-urls"| awk '{print $2}')
|
||||
LC_URL=$(systemctl cat etcd.service | grep "\--listen-client-urls"| awk '{print $2}')
|
||||
AC_URL=$(systemctl cat etcd.service | grep "\--advertise-client-urls"| awk '{print $2}')
|
||||
|
||||
|
||||
ETCD_CA_CERT=/etc/etcd/ca.crt
|
||||
ETCDCERT=/etc/etcd/etcd-server.crt
|
||||
ETCDKEY=/etc/etcd/etcd-server.key
|
||||
if [ $CERT_FILE == $ETCDCERT ] && [ $KEY_FILE == $ETCDKEY ] && [ $PEER_CERT_FILE == $ETCDCERT ] && [ $PEER_KEY_FILE == $ETCDKEY ] && \
|
||||
[ $TRUSTED_CA_FILE == $ETCD_CA_CERT ] && [ $PEER_TRUSTED_CA_FILE = $ETCD_CA_CERT ]
|
||||
then
|
||||
printf "${SUCCESS}ETCD certificate, ca and key files are correct under systemd service\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the ETCD certificate, ca and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md#configure-the-etcd-server\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ $IAP_URL == "https://$INTERNAL_IP:2380" ] && [ $LP_URL == "https://$INTERNAL_IP:2380" ] && [ $LC_URL == "https://$INTERNAL_IP:2379,https://127.0.0.1:2379" ] && \
|
||||
[ $AC_URL == "https://$INTERNAL_IP:2379" ]
|
||||
then
|
||||
printf "${SUCCESS}ETCD initial-advertise-peer-urls, listen-peer-urls, listen-client-urls, advertise-client-urls are correct\n"
|
||||
else
|
||||
printf "${FAILED}Exiting...Found mismatch in the ETCD initial-advertise-peer-urls / listen-peer-urls / listen-client-urls / advertise-client-urls. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md#configure-the-etcd-server\n"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
else
|
||||
printf "${FAILED}etcd-server.crt / etcd-server.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/07-bootstrapping-etcd.md#configure-the-etcd-server\n"
|
||||
exit 1
|
||||
fi
|
||||
}
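The flag extraction used in check_systemd_etcd (grep for the flag, take the first field, cut on "=") can be tried without systemd by pointing the same pipeline at a fixture unit file; here a plain file stands in for the output of `systemctl cat etcd.service`.

```
# Fixture unit file mimicking the etcd.service ExecStart block
TMP=$(mktemp -d)
cat > "$TMP/etcd.service" <<'EOF'
[Service]
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/etcd/etcd-server.crt \
  --key-file=/etc/etcd/etcd-server.key \
  --trusted-ca-file=/etc/etcd/ca.crt
EOF

# Same parsing as the check above: first field, value after "="
CERT_FILE=$(grep "\--cert-file" "$TMP/etcd.service" | awk '{print $1}' | cut -d "=" -f2)
KEY_FILE=$(grep "\--key-file" "$TMP/etcd.service" | awk '{print $1}' | cut -d "=" -f2)
echo "cert: $CERT_FILE key: $KEY_FILE"
```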
|
||||
|
||||
check_systemd_api()
{
    if [ -z "$APICERT" ] && [ -z "$APIKEY" ]
    then
        printf "${FAILED}please specify kube-api cert and key location, Exiting....\n"
        exit 1
    elif [ -f "$SYSTEMD_API_FILE" ]
    then
        printf "${NC}Systemd for kube-api service found, verifying the authenticity\n"

        INTERNAL_IP=$(ip addr show enp0s8 | grep "inet " | awk '{print $2}' | cut -d / -f 1)
        ADVERTISE_ADDRESS=$(systemctl cat kube-apiserver.service | grep "\--advertise-address" | awk '{print $1}' | cut -d "=" -f2)
        CLIENT_CA_FILE=$(systemctl cat kube-apiserver.service | grep "\--client-ca-file" | awk '{print $1}' | cut -d "=" -f2)
        ETCD_CA_FILE=$(systemctl cat kube-apiserver.service | grep "\--etcd-cafile" | awk '{print $1}' | cut -d "=" -f2)
        ETCD_CERT_FILE=$(systemctl cat kube-apiserver.service | grep "\--etcd-certfile" | awk '{print $1}' | cut -d "=" -f2)
        ETCD_KEY_FILE=$(systemctl cat kube-apiserver.service | grep "\--etcd-keyfile" | awk '{print $1}' | cut -d "=" -f2)
        KUBELET_CERTIFICATE_AUTHORITY=$(systemctl cat kube-apiserver.service | grep "\--kubelet-certificate-authority" | awk '{print $1}' | cut -d "=" -f2)
        KUBELET_CLIENT_CERTIFICATE=$(systemctl cat kube-apiserver.service | grep "\--kubelet-client-certificate" | awk '{print $1}' | cut -d "=" -f2)
        KUBELET_CLIENT_KEY=$(systemctl cat kube-apiserver.service | grep "\--kubelet-client-key" | awk '{print $1}' | cut -d "=" -f2)
        SERVICE_ACCOUNT_KEY_FILE=$(systemctl cat kube-apiserver.service | grep "\--service-account-key-file" | awk '{print $1}' | cut -d "=" -f2)
        TLS_CERT_FILE=$(systemctl cat kube-apiserver.service | grep "\--tls-cert-file" | awk '{print $1}' | cut -d "=" -f2)
        TLS_PRIVATE_KEY_FILE=$(systemctl cat kube-apiserver.service | grep "\--tls-private-key-file" | awk '{print $1}' | cut -d "=" -f2)

        CACERT=/var/lib/kubernetes/ca.crt
        APICERT=/var/lib/kubernetes/kube-apiserver.crt
        APIKEY=/var/lib/kubernetes/kube-apiserver.key
        SACERT=/var/lib/kubernetes/service-account.crt

        if [ "$ADVERTISE_ADDRESS" == "$INTERNAL_IP" ] && [ "$CLIENT_CA_FILE" == "$CACERT" ] && [ "$ETCD_CA_FILE" == "$CACERT" ] && \
           [ "$ETCD_CERT_FILE" == "/var/lib/kubernetes/etcd-server.crt" ] && [ "$ETCD_KEY_FILE" == "/var/lib/kubernetes/etcd-server.key" ] && \
           [ "$KUBELET_CERTIFICATE_AUTHORITY" == "$CACERT" ] && [ "$KUBELET_CLIENT_CERTIFICATE" == "$APICERT" ] && [ "$KUBELET_CLIENT_KEY" == "$APIKEY" ] && \
           [ "$SERVICE_ACCOUNT_KEY_FILE" == "$SACERT" ] && [ "$TLS_CERT_FILE" == "$APICERT" ] && [ "$TLS_PRIVATE_KEY_FILE" == "$APIKEY" ]
        then
            printf "${SUCCESS}kube-apiserver advertise-address/ client-ca-file/ etcd-cafile/ etcd-certfile/ etcd-keyfile/ kubelet-certificate-authority/ kubelet-client-certificate/ kubelet-client-key/ service-account-key-file/ tls-cert-file/ tls-private-key-file are correct\n"
        else
            printf "${FAILED}Exiting...Found mismatch in the kube-apiserver systemd file, check advertise-address/ client-ca-file/ etcd-cafile/ etcd-certfile/ etcd-keyfile/ kubelet-certificate-authority/ kubelet-client-certificate/ kubelet-client-key/ service-account-key-file/ tls-cert-file/ tls-private-key-file. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server\n"
            exit 1
        fi
    else
        printf "${FAILED}kube-apiserver.crt / kube-apiserver.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server\n"
        exit 1
    fi
}

check_systemd_kcm()
{
    KCMCERT=/var/lib/kubernetes/kube-controller-manager.crt
    KCMKEY=/var/lib/kubernetes/kube-controller-manager.key
    CACERT=/var/lib/kubernetes/ca.crt
    CAKEY=/var/lib/kubernetes/ca.key
    SAKEY=/var/lib/kubernetes/service-account.key
    KCMKUBECONFIG=/var/lib/kubernetes/kube-controller-manager.kubeconfig
    if [ -z "$KCMCERT" ] && [ -z "$KCMKEY" ]
    then
        printf "${FAILED}please specify cert and key location\n"
        exit 1
    elif [ -f "$SYSTEMD_KCM_FILE" ]
    then
        printf "${NC}Systemd for kube-controller-manager service found, verifying the authenticity\n"
        CLUSTER_SIGNING_CERT_FILE=$(systemctl cat kube-controller-manager.service | grep "\--cluster-signing-cert-file" | awk '{print $1}' | cut -d "=" -f2)
        CLUSTER_SIGNING_KEY_FILE=$(systemctl cat kube-controller-manager.service | grep "\--cluster-signing-key-file" | awk '{print $1}' | cut -d "=" -f2)
        KUBECONFIG=$(systemctl cat kube-controller-manager.service | grep "\--kubeconfig" | awk '{print $1}' | cut -d "=" -f2)
        ROOT_CA_FILE=$(systemctl cat kube-controller-manager.service | grep "\--root-ca-file" | awk '{print $1}' | cut -d "=" -f2)
        SERVICE_ACCOUNT_PRIVATE_KEY_FILE=$(systemctl cat kube-controller-manager.service | grep "\--service-account-private-key-file" | awk '{print $1}' | cut -d "=" -f2)

        if [ "$CLUSTER_SIGNING_CERT_FILE" == "$CACERT" ] && [ "$CLUSTER_SIGNING_KEY_FILE" == "$CAKEY" ] && [ "$KUBECONFIG" == "$KCMKUBECONFIG" ] && \
           [ "$ROOT_CA_FILE" == "$CACERT" ] && [ "$SERVICE_ACCOUNT_PRIVATE_KEY_FILE" == "$SAKEY" ]
        then
            printf "${SUCCESS}kube-controller-manager cluster-signing-cert-file, cluster-signing-key-file, kubeconfig, root-ca-file, service-account-private-key-file are correct\n"
        else
            printf "${FAILED}Exiting...Found mismatch in the kube-controller-manager cluster-signing-cert-file, cluster-signing-key-file, kubeconfig, root-ca-file, service-account-private-key-file. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-controller-manager\n"
            exit 1
        fi
    else
        printf "${FAILED}kube-controller-manager.crt / kube-controller-manager.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-controller-manager\n"
        exit 1
    fi
}

check_systemd_ks()
{
    KSCERT=/var/lib/kubernetes/kube-scheduler.crt
    KSKEY=/var/lib/kubernetes/kube-scheduler.key
    KSKUBECONFIG=/var/lib/kubernetes/kube-scheduler.kubeconfig

    if [ -z "$KSCERT" ] && [ -z "$KSKEY" ]
    then
        printf "${FAILED}please specify cert and key location\n"
        exit 1
    elif [ -f "$SYSTEMD_KS_FILE" ]
    then
        printf "${NC}Systemd for kube-scheduler service found, verifying the authenticity\n"

        KUBECONFIG=$(systemctl cat kube-scheduler.service | grep "\--kubeconfig" | awk '{print $1}' | cut -d "=" -f2)
        ADDRESS=$(systemctl cat kube-scheduler.service | grep "\--address" | awk '{print $1}' | cut -d "=" -f2)

        if [ "$KUBECONFIG" == "$KSKUBECONFIG" ] && [ "$ADDRESS" == "127.0.0.1" ]
        then
            printf "${SUCCESS}kube-scheduler --kubeconfig, --address are correct\n"
        else
            printf "${FAILED}Exiting...Found mismatch in the kube-scheduler --kubeconfig, --address. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-scheduler\n"
            exit 1
        fi
    else
        printf "${FAILED}kube-scheduler.crt / kube-scheduler.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-scheduler\n"
        exit 1
    fi
}

# END OF Function - Master node #

# Function - Worker-1 node #

check_cert_worker_1()
{
    if [ -z "$WORKER_1_CERT" ] && [ -z "$WORKER_1_KEY" ]
    then
        printf "${FAILED}please specify cert and key location of worker-1 node\n"
        exit 1
    elif [ -f "$WORKER_1_CERT" ] && [ -f "$WORKER_1_KEY" ]
    then
        printf "${NC}worker-1 cert and key found, verifying the authenticity\n"
        WORKER_1_CERT_SUBJECT=$(openssl x509 -in $WORKER_1_CERT -text | grep "Subject: CN" | tr -d " ")
        WORKER_1_CERT_ISSUER=$(openssl x509 -in $WORKER_1_CERT -text | grep "Issuer: CN" | tr -d " ")
        WORKER_1_CERT_MD5=$(openssl x509 -noout -modulus -in $WORKER_1_CERT | openssl md5 | awk '{print $2}')
        WORKER_1_KEY_MD5=$(openssl rsa -noout -modulus -in $WORKER_1_KEY | openssl md5 | awk '{print $2}')
        if [ "$WORKER_1_CERT_SUBJECT" == "Subject:CN=system:node:worker-1,O=system:nodes" ] && [ "$WORKER_1_CERT_ISSUER" == "Issuer:CN=KUBERNETES-CA" ] && [ "$WORKER_1_CERT_MD5" == "$WORKER_1_KEY_MD5" ]
        then
            printf "${SUCCESS}worker-1 cert and key are correct\n"
        else
            printf "${FAILED}Exiting...Found mismatch in the worker-1 certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#provisioning--kubelet-client-certificates\n"
            exit 1
        fi
    else
        printf "${FAILED}/var/lib/kubelet/worker-1.crt / /var/lib/kubelet/worker-1.key is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#provisioning--kubelet-client-certificates\n"
        exit 1
    fi
}

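check_cert_worker_1 pairs a certificate with its key by comparing MD5 hashes of their RSA moduli. The same check can be tried in isolation on a throwaway self-signed pair (the /tmp paths and CN below are stand-ins for illustration):

```shell
# Generate a throwaway self-signed cert/key pair (stand-in CN "demo"), then
# confirm the two share one RSA modulus, exactly as the script's check does.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 1 \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
CERT_MD5=$(openssl x509 -noout -modulus -in /tmp/demo.crt | openssl md5 | awk '{print $2}')
KEY_MD5=$(openssl rsa -noout -modulus -in /tmp/demo.key | openssl md5 | awk '{print $2}')
[ "$CERT_MD5" = "$KEY_MD5" ] && echo "cert and key match"
```

A cert generated from a different key would produce a different modulus hash and fail the comparison.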
check_cert_worker_1_kubeconfig()
{
    if [ -z "$WORKER_1_KUBECONFIG" ]
    then
        printf "${FAILED}please specify worker-1 kubeconfig location\n"
        exit 1
    elif [ -f "$WORKER_1_KUBECONFIG" ]
    then
        printf "${NC}worker-1 kubeconfig file found, verifying the authenticity\n"
        WORKER_1_KUBECONFIG_SUBJECT=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
        WORKER_1_KUBECONFIG_ISSUER=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
        WORKER_1_KUBECONFIG_CERT_MD5=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout -modulus | openssl md5 | awk '{print $2}')
        WORKER_1_KUBECONFIG_KEY_MD5=$(cat $WORKER_1_KUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout -modulus | openssl md5 | awk '{print $2}')
        WORKER_1_KUBECONFIG_SERVER=$(cat $WORKER_1_KUBECONFIG | grep "server:" | awk '{print $2}')
        if [ "$WORKER_1_KUBECONFIG_SUBJECT" == "Subject:CN=system:node:worker-1,O=system:nodes" ] && [ "$WORKER_1_KUBECONFIG_ISSUER" == "Issuer:CN=KUBERNETES-CA" ] && \
           [ "$WORKER_1_KUBECONFIG_CERT_MD5" == "$WORKER_1_KUBECONFIG_KEY_MD5" ] && [ "$WORKER_1_KUBECONFIG_SERVER" == "https://192.168.5.30:6443" ]
        then
            printf "${SUCCESS}worker-1 kubeconfig cert and key are correct\n"
        else
            printf "${FAILED}Exiting...Found mismatch in the worker-1 kubeconfig certificate and keys. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#the-kubelet-kubernetes-configuration-file\n"
            exit 1
        fi
    else
        printf "${FAILED}worker-1 /var/lib/kubelet/kubeconfig file is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#the-kubelet-kubernetes-configuration-file\n"
        exit 1
    fi
}

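The kubeconfig checks grep fields straight out of the YAML text rather than calling kubectl. The server-address extraction can be sketched on a minimal stand-in kubeconfig (the /tmp path is illustrative):

```shell
# Build a minimal stand-in kubeconfig and pull the API server address from it
# the same way the script does (grep + awk, no kubectl required).
cat > /tmp/demo-kubeconfig <<EOF
clusters:
- cluster:
    server: https://192.168.5.30:6443
EOF
SERVER=$(grep "server:" /tmp/demo-kubeconfig | awk '{print $2}')
echo "$SERVER"   # https://192.168.5.30:6443
```

This text-level approach is fragile against reordered or multi-cluster kubeconfigs, which is acceptable here because the lab generates the file in a fixed shape.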
check_cert_worker_1_kubelet()
{
    CACERT=/var/lib/kubernetes/ca.crt
    WORKER_1_TLSCERTFILE=/var/lib/kubelet/${HOSTNAME}.crt
    WORKER_1_TLSPRIVATEKEY=/var/lib/kubelet/${HOSTNAME}.key

    if [ -z "$WORKER_1_KUBELET" ] && [ -z "$SYSTEMD_WORKER_1_KUBELET" ]
    then
        printf "${FAILED}please specify worker-1 kubelet config location\n"
        exit 1
    elif [ -f "$WORKER_1_KUBELET" ] && [ -f "$SYSTEMD_WORKER_1_KUBELET" ] && [ -f "$WORKER_1_TLSCERTFILE" ] && [ -f "$WORKER_1_TLSPRIVATEKEY" ]
    then
        printf "${NC}worker-1 kubelet config file, systemd services, tls cert and key found, verifying the authenticity\n"

        WORKER_1_KUBELET_CA=$(cat $WORKER_1_KUBELET | grep "clientCAFile:" | awk '{print $2}' | tr -d " \"")
        WORKER_1_KUBELET_DNS=$(cat $WORKER_1_KUBELET | grep "resolvConf:" | awk '{print $2}' | tr -d " \"")
        WORKER_1_KUBELET_AUTH_MODE=$(cat $WORKER_1_KUBELET | grep "mode:" | awk '{print $2}' | tr -d " \"")

        if [ "$WORKER_1_KUBELET_CA" == "$CACERT" ] && [ "$WORKER_1_KUBELET_DNS" == "/run/systemd/resolve/resolv.conf" ] && \
           [ "$WORKER_1_KUBELET_AUTH_MODE" == "Webhook" ]
        then
            printf "${SUCCESS}worker-1 kubelet config CA cert, resolvConf and Auth mode are correct\n"
        else
            printf "${FAILED}Exiting...Found mismatch in the worker-1 kubelet config CA cert, resolvConf and Auth mode. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#configure-the-kubelet\n"
            exit 1
        fi

        KUBELETCONFIG=$(systemctl cat kubelet.service | grep "\--config" | awk '{print $1}' | cut -d "=" -f2)
        TLSCERTFILE=$(systemctl cat kubelet.service | grep "\--tls-cert-file" | awk '{print $1}' | cut -d "=" -f2)
        TLSPRIVATEKEY=$(systemctl cat kubelet.service | grep "\--tls-private-key-file" | awk '{print $1}' | cut -d "=" -f2)

        if [ "$KUBELETCONFIG" == "$WORKER_1_KUBELET" ] && [ "$TLSCERTFILE" == "$WORKER_1_TLSCERTFILE" ] && \
           [ "$TLSPRIVATEKEY" == "$WORKER_1_TLSPRIVATEKEY" ]
        then
            printf "${SUCCESS}worker-1 kubelet systemd services are correct\n"
        else
            printf "${FAILED}Exiting...Found mismatch in the worker-1 kubelet systemd services. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#configure-the-kubelet\n"
            exit 1
        fi

    else
        printf "${FAILED}worker-1 kubelet config, systemd services, tls cert and key file is missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md\n"
        exit 1
    fi
}

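The kubelet config values are likewise grepped out of the YAML, with tr stripping spaces and quotes. A sketch on a stand-in config fragment (the /tmp path is illustrative):

```shell
# Stand-in kubelet config fragment; extract clientCAFile and strip the quotes
# with tr, as check_cert_worker_1_kubelet does.
cat > /tmp/demo-kubelet.yaml <<EOF
authentication:
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.crt"
EOF
KUBELET_CA=$(grep "clientCAFile:" /tmp/demo-kubelet.yaml | awk '{print $2}' | tr -d " \"")
echo "$KUBELET_CA"   # /var/lib/kubernetes/ca.crt
```

Without the `tr -d " \""` step the comparison against the unquoted path in $CACERT would always fail.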
check_cert_worker_1_kp()
{
    WORKER_1_KP_CONFIG_YAML=/var/lib/kube-proxy/kube-proxy-config.yaml

    if [ -z "$WORKER_1_KP_KUBECONFIG" ] && [ -z "$SYSTEMD_WORKER_1_KP" ]
    then
        printf "${FAILED}please specify worker-1 kube-proxy config and systemd service path\n"
        exit 1
    elif [ -f "$WORKER_1_KP_KUBECONFIG" ] && [ -f "$SYSTEMD_WORKER_1_KP" ] && [ -f "$WORKER_1_KP_CONFIG_YAML" ]
    then
        printf "${NC}worker-1 kube-proxy kubeconfig, systemd services and configuration files found, verifying the authenticity\n"

        KP_CONFIG=$(cat $WORKER_1_KP_CONFIG_YAML | grep "kubeconfig:" | awk '{print $2}' | tr -d " \"")
        KP_CONFIG_YAML=$(systemctl cat kube-proxy.service | grep "\--config" | awk '{print $1}' | cut -d "=" -f2)

        if [ "$KP_CONFIG" == "$WORKER_1_KP_KUBECONFIG" ] && [ "$KP_CONFIG_YAML" == "$WORKER_1_KP_CONFIG_YAML" ]
        then
            printf "${SUCCESS}worker-1 kube-proxy kubeconfig and configuration files are correct\n"
        else
            printf "${FAILED}Exiting...Found mismatch in the worker-1 kube-proxy kubeconfig and configuration files. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#configure-the-kubernetes-proxy\n"
            exit 1
        fi

    else
        printf "${FAILED}worker-1 kube-proxy kubeconfig and configuration files are missing. More details: https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/09-bootstrapping-kubernetes-workers.md#configure-the-kubernetes-proxy\n"
        exit 1
    fi
}

# END OF Function - Worker-1 node #

echo -e "This script will validate the certificates in master as well as worker-1 nodes. Before proceeding, make sure you ssh into the respective node [ Master or Worker-1 ] for certificate validation\n"
echo -e "1. Verify certificates on the Master Node\n"
echo -e "2. Verify certificates on the Worker-1 Node\n"
echo -e "Please select either option 1 or 2\n"
read value

case $value in

    1)
        echo -e "The selected option is $value, proceeding with the certificate verification of the Master node"

        ### MASTER NODES ###
        master_hostname=$(hostname -s)
        # CRT & KEY verification
        check_cert_ca

        if [ "$master_hostname" == "master-1" ]
        then
            check_cert_admin
            check_cert_kcm
            check_cert_kp
            check_cert_ks
            check_cert_adminkubeconfig
            check_cert_kpkubeconfig
        fi
        check_cert_api
        check_cert_sa
        check_cert_etcd

        # Kubeconfig verification
        check_cert_kcmkubeconfig
        check_cert_kskubeconfig

        # Systemd verification
        check_systemd_etcd
        check_systemd_api
        check_systemd_kcm
        check_systemd_ks

        ### END OF MASTER NODES ###

        ;;

    2)
        echo -e "The selected option is $value, proceeding with the certificate verification of the Worker-1 node"

        ### WORKER-1 NODE ###

        check_cert_worker_1
        check_cert_worker_1_kubeconfig
        check_cert_worker_1_kubelet
        check_cert_worker_1_kp

        ### END OF WORKER-1 NODE ###
        ;;

    *)
        printf "${FAILED}Exiting.... Please select a valid option, either 1 or 2\n"
        exit 1
        ;;
esac
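The option menu is a plain `read` followed by a `case` dispatch. The same pattern can be exercised non-interactively; in this sketch, the check-function calls are replaced by a stand-in RESULT variable:

```shell
# Non-interactive version of the script's option dispatch; "2" selects the
# worker-1 branch. RESULT stands in for the real check-function calls.
value=2
case $value in
    1) RESULT="master checks" ;;
    2) RESULT="worker-1 checks" ;;
    *) RESULT="invalid option" ;;
esac
echo "$RESULT"   # worker-1 checks
```

Driving the real script the same way (e.g. `echo "1" | bash <script>`) works because `read` consumes from stdin.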
cd /tmp
curl -fsSL https://get.docker.com -o get-docker.sh
sh /tmp/get-docker.sh
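After get-docker.sh runs, a quick sanity check is to look for the docker CLI on PATH. This version prints either the binary path or a note, so it is safe to run on a machine where the install did not happen:

```shell
# Report where the docker CLI landed, or note its absence; never fails.
DOCKER_PATH=$(command -v docker || echo "docker not on PATH")
echo "$DOCKER_PATH"
```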