Merge branch 'master' into patch-1

pull/634/head
Mohamed Ayman 2021-04-18 23:42:14 +02:00 committed by GitHub
commit 8d3e96120c
16 changed files with 120 additions and 107 deletions

View File

@ -1,11 +1,5 @@
> This tutorial is a modified version of the original developed by [Kelsey Hightower](https://github.com/kelseyhightower/kubernetes-the-hard-way).
This repository holds the supporting material for the [Certified Kubernetes Administrators Course](https://kodekloud.com/p/certified-kubernetes-administrator-with-practice-tests). There are two major sections.
- [Kubernetes The Hard Way on VirtualBox](#kubernetes-the-hard-way-on-virtualbox)
- [Answers to Practice Tests hosted on KodeKloud](/practice-questions-answers)
# Kubernetes The Hard Way On VirtualBox
This tutorial walks you through setting up Kubernetes the hard way on a local machine using VirtualBox.

View File

@ -62,7 +62,7 @@ data:
loadbalance
}
---
apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
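The move from `extensions/v1beta1` to `apps/v1` matters because the older API group was removed in Kubernetes 1.16, and `apps/v1` Deployments additionally require a `spec.selector`. A quick way to confirm the cluster serves the new group before applying the manifest (a sketch, assuming a configured kubectl):
```
kubectl api-versions | grep '^apps/'
```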

View File

@ -2,7 +2,7 @@
## VM Hardware Requirements
8 GB of RAM (Preferebly 16 GB)
8 GB of RAM (Preferably 16 GB)
50 GB Disk space
## Virtual Box

View File

@ -73,7 +73,7 @@ Vagrant generates a private key for each of these VMs. It is placed under the .v
## Troubleshooting Tips
If any of the VMs failed to provision, or is not configured correct, delete the vm using the command:
1. If any of the VMs fails to provision, or is not configured correctly, delete the VM using the command:
`vagrant destroy <vm>`
@ -97,3 +97,11 @@ In such cases delete the VM, then delete the VM folder and then re-provision
`rmdir "<path-to-vm-folder>\kubernetes-ha-worker-2"`
`vagrant up`
2. Running `sysctl net.bridge.bridge-nf-call-iptables=1` sometimes returns the error "sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory". Loading the `br_netfilter` kernel module and re-applying the sysctl settings resolves the issue:
`modprobe br_netfilter`
`sysctl -p /etc/sysctl.conf`
The setting should now be reported as `net.bridge.bridge-nf-call-iptables=1`.
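To keep the module loaded across reboots, one option is a modules-load entry (an assumption: a systemd-based distro that reads `/etc/modules-load.d`):
`echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf`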

View File

@ -45,10 +45,9 @@ Results:
```
kube-proxy.kubeconfig
```
Reference docs for kube-proxy [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)
### The kube-controller-manager Kubernetes Configuration File

View File

@ -39,6 +39,15 @@ for instance in master-1 master-2; do
scp encryption-config.yaml ${instance}:~/
done
```
Move the `encryption-config.yaml` encryption config file to the appropriate directory on each controller node.
```
for instance in master-1 master-2; do
ssh ${instance} sudo mv encryption-config.yaml /var/lib/kubernetes/
done
```
Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data
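To confirm the file landed on each controller, a quick check reusing the same loop pattern (a sketch):
```
for instance in master-1 master-2; do
  ssh ${instance} ls -l /var/lib/kubernetes/encryption-config.yaml
done
```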
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)

View File

@ -8,7 +8,7 @@ The commands in this lab must be run on each controller instance: `master-1`, an
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time.
## Bootstrapping an etcd Cluster Member

View File

@ -78,7 +78,7 @@ Documentation=https://github.com/kubernetes/kubernetes
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--apiserver-count=2 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
@ -99,7 +99,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
--kubelet-https=true \\
--runtime-config=api/all \\
--runtime-config=api/all=true \\
--service-account-key-file=/var/lib/kubernetes/service-account.crt \\
--service-cluster-ip-range=10.96.0.0/24 \\
--service-node-port-range=30000-32767 \\
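Once the unit is installed and started, a minimal liveness check sketch (assuming the API server listens locally on 6443 and the CA is at the path used above; an authentication error such as 401/403 still proves the server is up):
```
curl --cacert /var/lib/kubernetes/ca.crt https://127.0.0.1:6443/healthz
```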

View File

@ -14,7 +14,11 @@ This is not a practical approach when you have 1000s of nodes in the cluster, an
- The Nodes can retrieve the signed certificate from the Kubernetes CA
- The Nodes can generate a kube-config file using this certificate by themselves
- The Nodes can start and join the cluster by themselves
- The Nodes can renew certificates when they expire by themselves
- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
In Kubernetes 1.11, a patch was merged to require administrator or controller approval of node serving CSRs for security reasons.
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation
So let's get started!
@ -39,16 +43,13 @@ So let's get started!
Copy the ca certificate to the worker node:
```
scp /var/lib/kubernetes/ca.crt worker-2:~/
```
## Step 1 Configure the Binaries on the Worker node
### Download and Install Worker Binaries
```
wget -q --show-progress --https-only --timestamping \
worker-2$ wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet
@ -59,7 +60,7 @@ Reference: https://kubernetes.io/docs/setup/release/#node-binaries
Create the installation directories:
```
sudo mkdir -p \
worker-2$ sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
@ -78,7 +79,7 @@ Install the worker binaries:
```
### Move the ca certificate
`sudo mv ca.crt /var/lib/kubernetes/`
`worker-2$ sudo mv ca.crt /var/lib/kubernetes/`
# Step 1 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke the Certificates API
@ -86,10 +87,10 @@ For the workers(kubelet) to access the Certificates API, they need to authentica
Bootstrap Tokens take the form of a 6-character token ID followed by a 16-character token secret, separated by a dot, e.g. `abcdef.0123456789abcdef`. More formally, they must match the regular expression `[a-z0-9]{6}\.[a-z0-9]{16}`.
Bootstrap Tokens are created as a secret in the kube-system namespace.
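If you prefer to generate a token rather than reuse the hard-coded example below, a small sketch (openssl's hex output conveniently satisfies the `[a-z0-9]` character class):
```
# 3 random bytes -> 6 hex chars (token id); 8 bytes -> 16 hex chars (token secret)
TOKEN_ID=$(openssl rand -hex 3)
TOKEN_SECRET=$(openssl rand -hex 8)
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```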
```
cat > bootstrap-token-07401b.yaml <<EOF
master-1$ cat > bootstrap-token-07401b.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
@ -119,7 +120,7 @@ stringData:
EOF
kubectl create -f bootstrap-token-07401b.yaml
master-1$ kubectl create -f bootstrap-token-07401b.yaml
```
@ -136,11 +137,11 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tok
Next we associate the group we created before with the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet.
```
kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
master-1$ kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
--------------- OR ---------------
cat > csrs-for-bootstrapping.yaml <<EOF
master-1$ cat > csrs-for-bootstrapping.yaml <<EOF
# enable bootstrapping nodes to create CSR
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@ -157,18 +158,18 @@ roleRef:
EOF
kubectl create -f csrs-for-bootstrapping.yaml
master-1$ kubectl create -f csrs-for-bootstrapping.yaml
```
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr
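Whichever variant you used, the resulting binding can be inspected afterwards (a quick check sketch):
`master-1$ kubectl describe clusterrolebinding create-csrs-for-bootstrapping`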
## Step 3 Authorize workers(kubelets) to approve CSR
```
kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
master-1$ kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
--------------- OR ---------------
cat > auto-approve-csrs-for-group.yaml <<EOF
master-1$ cat > auto-approve-csrs-for-group.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@ -185,7 +186,7 @@ roleRef:
EOF
kubectl create -f auto-approve-csrs-for-group.yaml
master-1$ kubectl create -f auto-approve-csrs-for-group.yaml
```
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
@ -195,11 +196,11 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
We now create the Cluster Role Binding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more, since by the time renewal is due the node is expected to be bootstrapped and part of the cluster already. All nodes are part of the **system:nodes** group.
```
kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
master-1$ kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
--------------- OR ---------------
cat > auto-approve-renewals-for-nodes.yaml <<EOF
master-1$ cat > auto-approve-renewals-for-nodes.yaml <<EOF
# Approve renewal CSRs for the group "system:nodes"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@ -216,7 +217,7 @@ roleRef:
EOF
kubectl create -f auto-approve-renewals-for-nodes.yaml
master-1$ kubectl create -f auto-approve-renewals-for-nodes.yaml
```
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
@ -231,7 +232,7 @@ Here, we don't have the certificates yet. So we cannot create a kubeconfig file.
This is to be done on the `worker-2` node.
```
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
worker-2$ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap
@ -240,7 +241,7 @@ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-conte
Or
```
cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
apiVersion: v1
clusters:
- cluster:
@ -269,7 +270,7 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
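Whichever method you used, the generated file can be inspected before moving on (a quick check sketch; sensitive data is redacted by default):
`worker-2$ sudo kubectl config view --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig`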
Create the `kubelet-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
@ -296,7 +297,7 @@ EOF
Create the `kubelet.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
@ -311,7 +312,6 @@ ExecStart=/usr/local/bin/kubelet \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--cert-dir=/var/lib/kubelet/pki/ \\
--rotate-certificates=true \\
--rotate-server-certificates=true \\
--network-plugin=cni \\
--register-node=true \\
--v=2
@ -327,20 +327,19 @@ Things to note here:
- **bootstrap-kubeconfig**: Location of the bootstrap-kubeconfig file.
- **cert-dir**: The directory where the generated certificates are stored.
- **rotate-certificates**: Rotates client certificates when they expire.
- **rotate-server-certificates**: Requests for server certificates on bootstrap and rotates them when they expire.
## Step 7 Configure the Kubernetes Proxy
In one of the previous steps we created the kube-proxy.kubeconfig file. Check [here](https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md) if you missed it.
```
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
worker-2$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy-config.yaml` configuration file:
```
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
worker-2$ cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
@ -353,7 +352,7 @@ EOF
Create the `kube-proxy.service` systemd unit file:
```
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
@ -371,6 +370,8 @@ EOF
## Step 8 Start the Worker Services
On worker-2:
```
{
sudo systemctl daemon-reload
@ -383,7 +384,7 @@ EOF
## Step 9 Approve Server CSR
`kubectl get csr`
`master-1$ kubectl get csr`
```
NAME AGE REQUESTOR CONDITION
@ -393,7 +394,9 @@ csr-95bv6 20s system:node:worker-
Approve
`kubectl certificate approve csr-95bv6`
`master-1$ kubectl certificate approve csr-95bv6`
Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubectl-approval
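Once the CSR is approved, the kubelet receives its serving certificate and the worker should show up in the cluster (a quick check):
`master-1$ kubectl get nodes`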

View File

@ -29,9 +29,9 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
kubectl config use-context kubernetes-the-hard-way
}
```
Reference doc for kubectl config [here](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
## Verification

View File

@ -53,6 +53,6 @@ subjects:
name: kube-apiserver
EOF
```
Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
Next: [DNS Addon](14-dns-addon.md)

View File

@ -3,9 +3,9 @@
Install Go
```
wget https://dl.google.com/go/go1.12.1.linux-amd64.tar.gz
wget https://dl.google.com/go/go1.15.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.12.1.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.15.linux-amd64.tar.gz
export GOPATH="/home/vagrant/go"
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
```
@ -13,23 +13,23 @@ export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
## Install kubetest
```
go get -v -u k8s.io/test-infra/kubetest
git clone https://github.com/kubernetes/test-infra.git
cd test-infra/
GO111MODULE=on go install ./kubetest
```
> Note: This may take a few minutes depending on your network speed
## Extract the Version
## Use the version specific to your cluster
```
kubetest --extract=v1.13.0
K8S_VERSION=$(kubectl version -o json | jq -r '.serverVersion.gitVersion')
export KUBERNETES_CONFORMANCE_TEST=y
export KUBECONFIG="$HOME/.kube/config"
cd kubernetes
export KUBE_MASTER_IP="192.168.5.11:6443"
export KUBE_MASTER=master-1
kubetest --test --provider=skeleton --test_args="--ginkgo.focus=\[Conformance\]" | tee test.out
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\]" --extract ${K8S_VERSION} | tee test.out
```
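When the run finishes, a rough pass/fail summary can be pulled from the captured output (a sketch, assuming `test.out` was produced by the command above; ginkgo prints SUCCESS/FAIL markers in its output):
```
grep -E "FAIL|SUCCESS" test.out | tail
```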

View File

@ -11,9 +11,18 @@ NODE_NAME="worker-1"; curl -sSL "https://localhost:6443/ap
kubectl -n kube-system create configmap nodes-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
```
Edit node to use the dynamically created configuration
Edit `worker-1` node to use the dynamically created configuration
```
kubectl edit worker-2
master-1# kubectl edit node worker-1
```
Add the following YAML bit under `spec`:
```
configSource:
configMap:
name: CONFIG_MAP_NAME # replace CONFIG_MAP_NAME with the name of the ConfigMap
namespace: kube-system
kubeletConfigKey: kubelet
```
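After saving, you can confirm the node object now carries the config source (a quick check sketch):
`master-1# kubectl get node worker-1 -o jsonpath='{.spec.configSource}'`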
Configure Kubelet Service
@ -45,3 +54,5 @@ RestartSec=5
WantedBy=multi-user.target
EOF
```
Reference: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/

View File

@ -5,7 +5,7 @@
Reference: https://github.com/etcd-io/etcd/releases
```
ETCD_VER=v3.3.13
ETCD_VER=v3.4.9
# choose either URL
GOOGLE_URL=https://storage.googleapis.com/etcd
@ -30,9 +30,15 @@ mv /tmp/etcd-download-test/etcdctl /usr/bin
```
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/snapshot-pre-boot.db
snapshot save /opt/snapshot-pre-boot.db
```
Note: In this case, **etcd** is running on the same server where we are running the commands (the *controlplane* node). As a result, the **--endpoints** argument is optional and can be omitted.
The options **--cert**, **--cacert** and **--key** are mandatory to authenticate to the etcd server when taking the backup.
If you want to back up an etcd service running on a different machine, you will have to provide the correct endpoint to that server (the IP address and port of the etcd server) with the **--endpoints** argument.
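Before proceeding, it is worth sanity-checking the snapshot file; `etcdctl snapshot status` reports its hash, revision, total keys and size (a quick check sketch):
```
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/snapshot-pre-boot.db
```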
# -----------------------------
# Disaster Happens
# -----------------------------
@ -40,51 +46,34 @@ ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kuberne
# 3. Restore ETCD Snapshot to a new folder
```
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--name=master \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
--data-dir /var/lib/etcd-from-backup \
--initial-cluster=master=https://127.0.0.1:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls=https://127.0.0.1:2380 \
snapshot restore /tmp/snapshot-pre-boot.db
ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-from-backup \
snapshot restore /opt/snapshot-pre-boot.db
```
Note: In this case, we are restoring the snapshot to a different directory, but on the same server where we took the backup (**the controlplane node**).
As a result, the only required option for the restore command is **--data-dir**.
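A quick way to verify that the restore produced a usable data directory (the restore creates a `member` subdirectory there):
```
ls -l /var/lib/etcd-from-backup
```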
# 4. Modify /etc/kubernetes/manifests/etcd.yaml
Update ETCD POD to use the new data directory and cluster token by modifying the pod definition file at `/etc/kubernetes/manifests/etcd.yaml`. When this file is updated, the ETCD pod is automatically re-created as this is a static pod placed under the `/etc/kubernetes/manifests` directory.
Update --data-dir to use new target location
We have now restored the etcd snapshot to a new path on the controlplane, **/var/lib/etcd-from-backup**, so the only change to be made in the YAML file is the hostPath for the volume called **etcd-data**: from the old directory (/var/lib/etcd) to the new directory, **/var/lib/etcd-from-backup**.
```
--data-dir=/var/lib/etcd-from-backup
```
Update new initial-cluster-token to specify new cluster
```
--initial-cluster-token=etcd-cluster-1
```
Update volumes and volume mounts to point to new path
```
volumeMounts:
- mountPath: /var/lib/etcd-from-backup
name: etcd-data
- mountPath: /etc/kubernetes/pki/etcd
name: etcd-certs
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /var/lib/etcd-from-backup
type: DirectoryOrCreate
name: etcd-data
- hostPath:
path: /etc/kubernetes/pki/etcd
type: DirectoryOrCreate
name: etcd-certs
```
With this change, /var/lib/etcd on the **container** points to /var/lib/etcd-from-backup on the **controlplane** (which is what we want)
> Note: You don't really need to update data directory and volumeMounts.mountPath path above. You could simply just update the hostPath.path in the volumes section to point to the new directory. But if you are not working with a kubeadm deployed cluster, then you might have to update the data directory. That's why I left it as is.
When this file is updated, the ETCD pod is automatically re-created as this is a static pod placed under the `/etc/kubernetes/manifests` directory.
> Note: as the ETCD pod has changed it will automatically restart, along with kube-controller-manager and kube-scheduler. Wait 1-2 minutes for these pods to restart. You can run `watch "docker ps | grep etcd"` to see when the ETCD pod is restarted.
> Note2: If the etcd pod is not reaching `Ready 1/1`, restart it with `kubectl delete pod -n kube-system etcd-controlplane` and wait 1 minute.
> Note3: This is the simplest way to make sure that ETCD uses the restored data after the ETCD pod is recreated. You **don't** have to change anything else.
**If** you do change **--data-dir** to **/var/lib/etcd-from-backup** in the YAML file, make sure that the **volumeMounts** for **etcd-data** is updated as well, with the mountPath pointing to /var/lib/etcd-from-backup (**THIS COMPLETE STEP IS OPTIONAL AND NEED NOT BE DONE FOR COMPLETING THE RESTORE**)
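Once the static pod is recreated, a final check from the API side (a sketch, using the pod name from the note above):
`kubectl -n kube-system get pod etcd-controlplane`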

Binary file not shown.

View File

@ -310,8 +310,8 @@ check_cert_kpkubeconfig()
elif [ -f $KPKUBECONFIG ]
then
printf "${NC}kube-proxy kubeconfig file found, verifying the authenticity\n"
KPKUBECONFIG_SUBJECT=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
KPKUBECONFIG_ISSUER=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
KPKUBECONFIG_SUBJECT=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
KPKUBECONFIG_ISSUER=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
KPKUBECONFIG_CERT_MD5=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
KPKUBECONFIG_KEY_MD5=$(cat $KPKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
KPKUBECONFIG_SERVER=$(cat $KPKUBECONFIG | grep "server:"| awk '{print $2}')
@ -337,8 +337,8 @@ check_cert_kcmkubeconfig()
elif [ -f $KCMKUBECONFIG ]
then
printf "${NC}kube-controller-manager kubeconfig file found, verifying the authenticity\n"
KCMKUBECONFIG_SUBJECT=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
KCMKUBECONFIG_ISSUER=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
KCMKUBECONFIG_SUBJECT=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
KCMKUBECONFIG_ISSUER=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
KCMKUBECONFIG_CERT_MD5=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
KCMKUBECONFIG_KEY_MD5=$(cat $KCMKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
KCMKUBECONFIG_SERVER=$(cat $KCMKUBECONFIG | grep "server:"| awk '{print $2}')
@ -365,8 +365,8 @@ check_cert_kskubeconfig()
elif [ -f $KSKUBECONFIG ]
then
printf "${NC}kube-scheduler kubeconfig file found, verifying the authenticity\n"
KSKUBECONFIG_SUBJECT=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
KSKUBECONFIG_ISSUER=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
KSKUBECONFIG_SUBJECT=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
KSKUBECONFIG_ISSUER=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
KSKUBECONFIG_CERT_MD5=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
KSKUBECONFIG_KEY_MD5=$(cat $KSKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
KSKUBECONFIG_SERVER=$(cat $KSKUBECONFIG | grep "server:"| awk '{print $2}')
@ -392,8 +392,8 @@ check_cert_adminkubeconfig()
elif [ -f $ADMINKUBECONFIG ]
then
printf "${NC}admin kubeconfig file found, verifying the authenticity\n"
ADMINKUBECONFIG_SUBJECT=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
ADMINKUBECONFIG_ISSUER=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
ADMINKUBECONFIG_SUBJECT=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
ADMINKUBECONFIG_ISSUER=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
ADMINKUBECONFIG_CERT_MD5=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
ADMINKUBECONFIG_KEY_MD5=$(cat $ADMINKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
ADMINKUBECONFIG_SERVER=$(cat $ADMINKUBECONFIG | grep "server:"| awk '{print $2}')
@ -611,8 +611,8 @@ check_cert_worker_1_kubeconfig()
elif [ -f $WORKER_1_KUBECONFIG ]
then
printf "${NC}worker-1 kubeconfig file found, verifying the authenticity\n"
WORKER_1_KUBECONFIG_SUBJECT=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
WORKER_1_KUBECONFIG_ISSUER=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
WORKER_1_KUBECONFIG_SUBJECT=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
WORKER_1_KUBECONFIG_ISSUER=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
WORKER_1_KUBECONFIG_CERT_MD5=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
WORKER_1_KUBECONFIG_KEY_MD5=$(cat $WORKER_1_KUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
WORKER_1_KUBECONFIG_SERVER=$(cat $WORKER_1_KUBECONFIG | grep "server:"| awk '{print $2}')