Merge branch 'master' into missing-hyper-links
commit dc2137710c

@@ -62,7 +62,7 @@ data:
 loadbalance
 }
 ---
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: coredns
@@ -2,7 +2,7 @@
 
 ## VM Hardware Requirements
 
-8 GB of RAM (Preferebly 16 GB)
+8 GB of RAM (Preferably 16 GB)
 50 GB Disk space
 
 ## Virtual Box
@@ -26,6 +26,3 @@ Download and Install [Vagrant](https://www.vagrantup.com/) on your platform.
 - Centos
 - Linux
 - macOS
-- Arch Linux
-
-Next: [Compute Resources](02-compute-resources.md)
@@ -18,7 +18,9 @@ Run Vagrant up
 This does the below:
 
 - Deploys 5 VMs - 2 Master, 2 Worker and 1 Loadbalancer with the name 'kubernetes-ha-* '
-> This is the default settings. This can be changed at the top of the Vagrant file
+> These are the default settings. They can be changed at the top of the Vagrantfile.
+> If you choose to change these settings, please also update vagrant/ubuntu/vagrant/setup-hosts.sh
+> to add the additional hosts to the default /etc/hosts file before running "vagrant up".
 
 - Set's IP addresses in the range 192.168.5
 
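
The note above assumes extra hosts are appended to /etc/hosts by the provisioning script. A minimal sketch of the expected entries (the per-VM IPs are hypothetical, following the guide's 192.168.5.x defaults; only the load balancer's 192.168.5.30 is confirmed later in these docs):

```
# Hypothetical /etc/hosts entries for the default 5-VM layout
cat >> /etc/hosts <<EOF
192.168.5.11 master-1
192.168.5.12 master-2
192.168.5.21 worker-1
192.168.5.22 worker-2
192.168.5.30 lb
EOF
```
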
@@ -73,7 +75,7 @@ Vagrant generates a private key for each of these VMs. It is placed under the .v
 
 ## Troubleshooting Tips
 
-If any of the VMs failed to provision, or is not configured correct, delete the vm using the command:
+1. If any of the VMs failed to provision, or is not configured correctly, delete the VM using the command:
 
 `vagrant destroy <vm>`
 
@@ -98,5 +100,3 @@ In such cases delete the VM, then delete the VM folder and then re-provision
 
 `vagrant up`
 
-
-Next: [Client Tools](03-client-tools.md)
@@ -45,10 +45,9 @@ Results:
 
 ```
 kube-proxy.kubeconfig
+```
 
 Reference docs for kube-proxy [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)
-```
 
 ### The kube-controller-manager Kubernetes Configuration File
 
-
@@ -39,6 +39,15 @@ for instance in master-1 master-2; do
   scp encryption-config.yaml ${instance}:~/
 done
 ```
 
+Move the `encryption-config.yaml` encryption config file to the appropriate directory:
+
+```
+for instance in master-1 master-2; do
+  ssh ${instance} sudo mv encryption-config.yaml /var/lib/kubernetes/
+done
+```
+
 Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data
 
 Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
@@ -8,7 +8,7 @@ The commands in this lab must be run on each controller instance: `master-1`, an
 
 ### Running commands in parallel with tmux
 
-[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
+[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time.
 
 ## Bootstrapping an etcd Cluster Member
 
@@ -78,7 +78,7 @@ Documentation=https://github.com/kubernetes/kubernetes
 ExecStart=/usr/local/bin/kube-apiserver \\
   --advertise-address=${INTERNAL_IP} \\
   --allow-privileged=true \\
-  --apiserver-count=3 \\
+  --apiserver-count=2 \\
   --audit-log-maxage=30 \\
   --audit-log-maxbackup=3 \\
   --audit-log-maxsize=100 \\
@@ -99,7 +99,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
   --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
   --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
   --kubelet-https=true \\
-  --runtime-config=api/all \\
+  --runtime-config=api/all=true \\
   --service-account-key-file=/var/lib/kubernetes/service-account.crt \\
   --service-cluster-ip-range=10.96.0.0/24 \\
   --service-node-port-range=30000-32767 \\
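
After changing these kube-apiserver flags on the masters, it is worth confirming the control plane comes back healthy. A minimal sketch, assuming the admin.kubeconfig created earlier in the guide:

```
# Reload systemd units, restart the API server, then query component health
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver
kubectl get componentstatuses --kubeconfig admin.kubeconfig
```
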
@@ -14,7 +14,11 @@ This is not a practical approach when you have 1000s of nodes in the cluster, an
 - The Nodes can retrieve the signed certificate from the Kubernetes CA
 - The Nodes can generate a kube-config file using this certificate by themselves
 - The Nodes can start and join the cluster by themselves
-- The Nodes can renew certificates when they expire by themselves
+- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
+
+In Kubernetes 1.11 a patch was merged to require administrator or Controller approval of node serving CSRs for security reasons.
+
+Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation
 
 So let's get started!
 
@@ -39,16 +43,13 @@ So let's get started!
 
 Copy the ca certificate to the worker node:
 
-```
-scp ca.crt worker-2:~/
-```
 
 ## Step 1 Configure the Binaries on the Worker node
 
 ### Download and Install Worker Binaries
 
 ```
-wget -q --show-progress --https-only --timestamping \
+worker-2$ wget -q --show-progress --https-only --timestamping \
   https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
   https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
   https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet
@@ -59,7 +60,7 @@ Reference: https://kubernetes.io/docs/setup/release/#node-binaries
 Create the installation directories:
 
 ```
-sudo mkdir -p \
+worker-2$ sudo mkdir -p \
   /etc/cni/net.d \
   /opt/cni/bin \
   /var/lib/kubelet \
@@ -78,7 +79,7 @@ Install the worker binaries:
 ```
 ### Move the ca certificate
 
-`sudo mv ca.crt /var/lib/kubernetes/`
+`worker-2$ sudo mv ca.crt /var/lib/kubernetes/`
 
 # Step 1 Create the Boostrap Token to be used by Nodes(Kubelets) to invoke Certificate API
 
@@ -86,10 +87,10 @@ For the workers(kubelet) to access the Certificates API, they need to authentica
 
 Bootstrap Tokens take the form of a 6 character token id followed by 16 character token secret separated by a dot. Eg: abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}\.[a-z0-9]{16}
 
-Bootstrap Tokens are created as a secret in the kube-system namespace.
 
 ```
-cat > bootstrap-token-07401b.yaml <<EOF
+master-1$ cat > bootstrap-token-07401b.yaml <<EOF
 apiVersion: v1
 kind: Secret
 metadata:
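
The 07401b.f395accd246ae52d token used in this lab is hard-coded. A fresh token matching [a-z0-9]{6}\.[a-z0-9]{16} can be generated on any Linux host; a sketch, not part of the original lab:

```
# Generate a 6-character token id and a 16-character token secret
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```
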
@@ -119,7 +120,7 @@ stringData:
 EOF
 
 
-kubectl create -f bootstrap-token-07401b.yaml
+master-1$ kubectl create -f bootstrap-token-07401b.yaml
 
 ```
 
@@ -136,11 +137,11 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tok
 Next we associate the group we created before to the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet
 
 ```
-kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
+master-1$ kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
 
 --------------- OR ---------------
 
-cat > csrs-for-bootstrapping.yaml <<EOF
+master-1$ cat > csrs-for-bootstrapping.yaml <<EOF
 # enable bootstrapping nodes to create CSR
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
@@ -157,18 +158,18 @@ roleRef:
 EOF
 
 
-kubectl create -f csrs-for-bootstrapping.yaml
+master-1$ kubectl create -f csrs-for-bootstrapping.yaml
 
 ```
 Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr
 
 ## Step 3 Authorize workers(kubelets) to approve CSR
 ```
-kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
+master-1$ kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
 
 --------------- OR ---------------
 
-cat > auto-approve-csrs-for-group.yaml <<EOF
+master-1$ cat > auto-approve-csrs-for-group.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
@@ -185,7 +186,7 @@ roleRef:
 EOF
 
 
-kubectl create -f auto-approve-csrs-for-group.yaml
+master-1$ kubectl create -f auto-approve-csrs-for-group.yaml
 ```
 
 Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
@@ -195,11 +196,11 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
 We now create the Cluster Role Binding required for the nodes to automatically renew the certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more. Since by the renewal period, we believe the node would be bootstrapped and part of the cluster already. All nodes are part of the **system:nodes** group.
 
 ```
-kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
+master-1$ kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
 
 --------------- OR ---------------
 
-cat > auto-approve-renewals-for-nodes.yaml <<EOF
+master-1$ cat > auto-approve-renewals-for-nodes.yaml <<EOF
 # Approve renewal CSRs for the group "system:nodes"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
@@ -216,7 +217,7 @@ roleRef:
 EOF
 
 
-kubectl create -f auto-approve-renewals-for-nodes.yaml
+master-1$ kubectl create -f auto-approve-renewals-for-nodes.yaml
 ```
 
 Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
@@ -231,7 +232,7 @@ Here, we don't have the certificates yet. So we cannot create a kubeconfig file.
 This is to be done on the `worker-2` node.
 
 ```
-sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
+worker-2$ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
 sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
 sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
 sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap
@@ -240,7 +241,7 @@ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-conte
 Or
 
 ```
-cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
+worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
 apiVersion: v1
 clusters:
 - cluster:
@@ -269,7 +270,7 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
 Create the `kubelet-config.yaml` configuration file:
 
 ```
-cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
+worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
 kind: KubeletConfiguration
 apiVersion: kubelet.config.k8s.io/v1beta1
 authentication:
@@ -296,7 +297,7 @@ EOF
 Create the `kubelet.service` systemd unit file:
 
 ```
-cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
+worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
 [Unit]
 Description=Kubernetes Kubelet
 Documentation=https://github.com/kubernetes/kubernetes
@@ -311,7 +312,6 @@ ExecStart=/usr/local/bin/kubelet \\
   --kubeconfig=/var/lib/kubelet/kubeconfig \\
   --cert-dir=/var/lib/kubelet/pki/ \\
   --rotate-certificates=true \\
-  --rotate-server-certificates=true \\
   --network-plugin=cni \\
   --register-node=true \\
   --v=2
@@ -327,20 +327,19 @@ Things to note here:
 - **bootstrap-kubeconfig**: Location of the bootstrap-kubeconfig file.
 - **cert-dir**: The directory where the generated certificates are stored.
 - **rotate-certificates**: Rotates client certificates when they expire.
-- **rotate-server-certificates**: Requests for server certificates on bootstrap and rotates them when they expire.
 
 ## Step 7 Configure the Kubernetes Proxy
 
 In one of the previous steps we created the kube-proxy.kubeconfig file. Check [here](https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md) if you missed it.
 
 ```
-sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
+worker-2$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
 ```
 
 Create the `kube-proxy-config.yaml` configuration file:
 
 ```
-cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
+worker-2$ cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
 kind: KubeProxyConfiguration
 apiVersion: kubeproxy.config.k8s.io/v1alpha1
 clientConnection:
@@ -353,7 +352,7 @@ EOF
 Create the `kube-proxy.service` systemd unit file:
 
 ```
-cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
+worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
 [Unit]
 Description=Kubernetes Kube Proxy
 Documentation=https://github.com/kubernetes/kubernetes
@@ -371,6 +370,8 @@ EOF
 
 ## Step 8 Start the Worker Services
 
+On worker-2:
+
 ```
 {
   sudo systemctl daemon-reload
@@ -383,7 +384,7 @@ EOF
 
 ## Step 9 Approve Server CSR
 
-`kubectl get csr`
+`master-1$ kubectl get csr`
 
 ```
 NAME AGE REQUESTOR CONDITION
@@ -393,7 +394,9 @@ csr-95bv6 20s system:node:worker-
 
 Approve
 
-`kubectl certificate approve csr-95bv6`
+`master-1$ kubectl certificate approve csr-95bv6`
 
+Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
+
 Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubectl-approval
 
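
When several workers bootstrap at once, approving each server CSR by name gets tedious. A hedged convenience one-liner (assumes GNU xargs; review the pending list before approving on a real cluster):

```
# Approve every CSR still in Pending state (CONDITION is the last column)
kubectl get csr --no-headers | awk '$NF == "Pending" {print $1}' | xargs -r kubectl certificate approve
```
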
@@ -29,9 +29,9 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
 
   kubectl config use-context kubernetes-the-hard-way
 }
+```
 
 Reference doc for kubectl config [here](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
-```
 
 ## Verification
 
@@ -32,9 +32,9 @@ EOF
 ```
 Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole
 
-The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
+The Kubernetes API Server authenticates to the Kubelet as the `system:kube-apiserver` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
 
-Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
+Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `system:kube-apiserver` user:
 
 ```
 cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
@@ -50,9 +50,9 @@ roleRef:
 subjects:
 - apiGroup: rbac.authorization.k8s.io
   kind: User
-  name: kube-apiserver
+  name: system:kube-apiserver
 EOF
 ```
-Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
+Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
 
 Next: [DNS Addon](14-dns-addon.md)
@@ -3,9 +3,9 @@
 Install Go
 
 ```
-wget https://dl.google.com/go/go1.12.1.linux-amd64.tar.gz
+wget https://dl.google.com/go/go1.15.linux-amd64.tar.gz
 
-sudo tar -C /usr/local -xzf go1.12.1.linux-amd64.tar.gz
+sudo tar -C /usr/local -xzf go1.15.linux-amd64.tar.gz
 export GOPATH="/home/vagrant/go"
 export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
 ```
@@ -11,9 +11,18 @@ NODE_NAME="worker-1"; NODE_NAME="worker-1"; curl -sSL "https://localhost:6443/ap
 kubectl -n kube-system create configmap nodes-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
 ```
 
-Edit node to use the dynamically created configuration
+Edit `worker-1` node to use the dynamically created configuration
 ```
-kubectl edit worker-2
+master-1# kubectl edit node worker-1
+```
+
+Add the following YAML bit under `spec`:
+```
+configSource:
+  configMap:
+    name: CONFIG_MAP_NAME # replace CONFIG_MAP_NAME with the name of the ConfigMap
+    namespace: kube-system
+    kubeletConfigKey: kubelet
 ```
 
 Configure Kubelet Service
@@ -45,3 +54,5 @@ RestartSec=5
 WantedBy=multi-user.target
 EOF
 ```
+
+Reference: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
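
Once the node is edited, the kubelet reports which config source is assigned and active in the node status. A minimal check (the worker-1 name follows the example above):

```
# Shows the assigned/active/lastKnownGood config sources for the node
kubectl get node worker-1 -o jsonpath='{.status.config}'
```
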
@@ -5,7 +5,7 @@
 Reference: https://github.com/etcd-io/etcd/releases
 
 ```
-ETCD_VER=v3.3.13
+ETCD_VER=v3.4.9
 
 # choose either URL
 GOOGLE_URL=https://storage.googleapis.com/etcd
@@ -30,9 +30,15 @@ mv /tmp/etcd-download-test/etcdctl /usr/bin
 ```
 ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
-    snapshot save /tmp/snapshot-pre-boot.db
+    snapshot save /opt/snapshot-pre-boot.db
 ```
 
+Note: In this case, the **ETCD** server is running on the same machine where we are running the commands (the *controlplane* node). As a result, the **--endpoints** argument is optional and can be omitted.
+
+The options **--cert, --cacert and --key** are mandatory to authenticate to the ETCD server to take the backup.
+
+If you want to take a backup of the ETCD service running on a different machine, you will have to provide the correct endpoint to that server (the IP address and port of the etcd server, with the **--endpoints** argument).
+
 # -----------------------------
 # Disaster Happens
 # -----------------------------
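
Before simulating the disaster, the snapshot file can be sanity-checked with etcdctl's built-in status subcommand:

```
# Print the snapshot's hash, revision, total keys and size
ETCDCTL_API=3 etcdctl snapshot status /opt/snapshot-pre-boot.db --write-out=table
```
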
@@ -40,51 +46,34 @@ ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kuberne
 # 3. Restore ETCD Snapshot to a new folder
 
 ```
-ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
-    --name=master \
-    --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
-    --data-dir /var/lib/etcd-from-backup \
-    --initial-cluster=master=https://127.0.0.1:2380 \
-    --initial-cluster-token=etcd-cluster-1 \
-    --initial-advertise-peer-urls=https://127.0.0.1:2380 \
-    snapshot restore /tmp/snapshot-pre-boot.db
+ETCDCTL_API=3 etcdctl --data-dir /var/lib/etcd-from-backup \
+    snapshot restore /opt/snapshot-pre-boot.db
 ```
 
+Note: In this case, we are restoring the snapshot to a different directory, but on the same server where we took the backup (**the controlplane node**).
+As a result, the only required option for the restore command is **--data-dir**.
 
 # 4. Modify /etc/kubernetes/manifests/etcd.yaml
 
-Update ETCD POD to use the new data directory and cluster token by modifying the pod definition file at `/etc/kubernetes/manifests/etcd.yaml`. When this file is updated, the ETCD pod is automatically re-created as this is a static pod placed under the `/etc/kubernetes/manifests` directory.
+We have now restored the etcd snapshot to a new path on the controlplane (**/var/lib/etcd-from-backup**), so the only change to be made in the YAML file is to change the hostPath for the volume called **etcd-data** from the old directory (/var/lib/etcd) to the new directory (**/var/lib/etcd-from-backup**).
 
-Update --data-dir to use new target location
-
 ```
---data-dir=/var/lib/etcd-from-backup
-```
-
-Update new initial-cluster-token to specify new cluster
-
-```
---initial-cluster-token=etcd-cluster-1
-```
-
-Update volumes and volume mounts to point to new path
-
-```
-volumeMounts:
-- mountPath: /var/lib/etcd-from-backup
-  name: etcd-data
-- mountPath: /etc/kubernetes/pki/etcd
-  name: etcd-certs
-hostNetwork: true
-priorityClassName: system-cluster-critical
 volumes:
 - hostPath:
     path: /var/lib/etcd-from-backup
     type: DirectoryOrCreate
   name: etcd-data
-- hostPath:
-    path: /etc/kubernetes/pki/etcd
-    type: DirectoryOrCreate
-  name: etcd-certs
 ```
+With this change, /var/lib/etcd on the **container** points to /var/lib/etcd-from-backup on the **controlplane** (which is what we want).
 
-> Note: You don't really need to update data directory and volumeMounts.mountPath path above. You could simply just update the hostPath.path in the volumes section to point to the new directory. But if you are not working with a kubeadm deployed cluster, then you might have to update the data directory. That's why I left it as is.
+When this file is updated, the ETCD pod is automatically re-created as this is a static pod placed under the `/etc/kubernetes/manifests` directory.
 
+> Note: as the ETCD pod has changed it will automatically restart, along with kube-controller-manager and kube-scheduler. Wait 1-2 mins for these pods to restart. You can run `watch "docker ps | grep etcd"` to see when the ETCD pod is restarted.
 
+> Note2: If the etcd pod is not getting `Ready 1/1`, then restart it with `kubectl delete pod -n kube-system etcd-controlplane` and wait 1 minute.
 
+> Note3: This is the simplest way to make sure that ETCD uses the restored data after the ETCD pod is recreated. You **don't** have to change anything else.
 
+**If** you do change **--data-dir** to **/var/lib/etcd-from-backup** in the YAML file, make sure that the **volumeMounts** for **etcd-data** is updated as well, with the mountPath pointing to /var/lib/etcd-from-backup (**THIS COMPLETE STEP IS OPTIONAL AND NEED NOT BE DONE FOR COMPLETING THE RESTORE**).
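
Two ways to watch the control-plane pods come back after editing the manifest (the docker form is the one the note above uses; watching via kubectl is an alternative that works regardless of runtime):

```
# Docker runtimes, as in the note above
watch "docker ps | grep etcd"

# Or watch the kube-system pods recover via kubectl
kubectl -n kube-system get pods -w
```
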
@@ -310,8 +310,8 @@ check_cert_kpkubeconfig()
 elif [ -f $KPKUBECONFIG ]
 then
 printf "${NC}kube-proxy kubeconfig file found, verifying the authenticity\n"
-KPKUBECONFIG_SUBJECT=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-KPKUBECONFIG_ISSUER=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+KPKUBECONFIG_SUBJECT=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+KPKUBECONFIG_ISSUER=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
 KPKUBECONFIG_CERT_MD5=$(cat $KPKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
 KPKUBECONFIG_KEY_MD5=$(cat $KPKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
 KPKUBECONFIG_SERVER=$(cat $KPKUBECONFIG | grep "server:"| awk '{print $2}')
@@ -337,8 +337,8 @@ check_cert_kcmkubeconfig()
 elif [ -f $KCMKUBECONFIG ]
 then
 printf "${NC}kube-controller-manager kubeconfig file found, verifying the authenticity\n"
-KCMKUBECONFIG_SUBJECT=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-KCMKUBECONFIG_ISSUER=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+KCMKUBECONFIG_SUBJECT=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+KCMKUBECONFIG_ISSUER=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
 KCMKUBECONFIG_CERT_MD5=$(cat $KCMKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
 KCMKUBECONFIG_KEY_MD5=$(cat $KCMKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
 KCMKUBECONFIG_SERVER=$(cat $KCMKUBECONFIG | grep "server:"| awk '{print $2}')
@@ -365,8 +365,8 @@ check_cert_kskubeconfig()
 elif [ -f $KSKUBECONFIG ]
 then
 printf "${NC}kube-scheduler kubeconfig file found, verifying the authenticity\n"
-KSKUBECONFIG_SUBJECT=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-KSKUBECONFIG_ISSUER=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+KSKUBECONFIG_SUBJECT=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+KSKUBECONFIG_ISSUER=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
 KSKUBECONFIG_CERT_MD5=$(cat $KSKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
 KSKUBECONFIG_KEY_MD5=$(cat $KSKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
 KSKUBECONFIG_SERVER=$(cat $KSKUBECONFIG | grep "server:"| awk '{print $2}')
@@ -392,8 +392,8 @@ check_cert_adminkubeconfig()
 elif [ -f $ADMINKUBECONFIG ]
 then
 printf "${NC}admin kubeconfig file found, verifying the authenticity\n"
-ADMINKUBECONFIG_SUBJECT=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-ADMINKUBECONFIG_ISSUER=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+ADMINKUBECONFIG_SUBJECT=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+ADMINKUBECONFIG_ISSUER=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
 ADMINKUBECONFIG_CERT_MD5=$(cat $ADMINKUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
 ADMINKUBECONFIG_KEY_MD5=$(cat $ADMINKUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
 ADMINKUBECONFIG_SERVER=$(cat $ADMINKUBECONFIG | grep "server:"| awk '{print $2}')
@@ -611,8 +611,8 @@ check_cert_worker_1_kubeconfig()
 elif [ -f $WORKER_1_KUBECONFIG ]
 then
 printf "${NC}worker-1 kubeconfig file found, verifying the authenticity\n"
-WORKER_1_KUBECONFIG_SUBJECT=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Subject: CN" | tr -d " ")
-WORKER_1_KUBECONFIG_ISSUER=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 --text | grep "Issuer: CN" | tr -d " ")
+WORKER_1_KUBECONFIG_SUBJECT=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Subject: CN" | tr -d " ")
+WORKER_1_KUBECONFIG_ISSUER=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -text | grep "Issuer: CN" | tr -d " ")
 WORKER_1_KUBECONFIG_CERT_MD5=$(cat $WORKER_1_KUBECONFIG | grep "client-certificate-data:" | awk '{print $2}' | base64 --decode | openssl x509 -noout | openssl md5 | awk '{print $2}')
 WORKER_1_KUBECONFIG_KEY_MD5=$(cat $WORKER_1_KUBECONFIG | grep "client-key-data" | awk '{print $2}' | base64 --decode | openssl rsa -noout | openssl md5 | awk '{print $2}')
 WORKER_1_KUBECONFIG_SERVER=$(cat $WORKER_1_KUBECONFIG | grep "server:"| awk '{print $2}')
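
The `--text` to `-text` changes above matter because older openssl builds accept only the single-dash option form. A standalone sketch of the same authenticity check, using kube-proxy.kubeconfig as the example input:

```
# Extract and decode the client certificate, then print its subject and issuer
grep "client-certificate-data:" kube-proxy.kubeconfig | awk '{print $2}' \
  | base64 --decode | openssl x509 -noout -subject -issuer
```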