Merge branch 'master' into missing-hyper-links

Mohamed Ayman, 2021-04-19 00:06:18 +02:00, committed via GitHub
15 changed files with 109 additions and 101 deletions


@@ -2,7 +2,7 @@
## VM Hardware Requirements
-8 GB of RAM (Preferebly 16 GB)
+8 GB of RAM (Preferably 16 GB)
50 GB Disk space
## Virtual Box
@@ -26,6 +26,3 @@ Download and Install [Vagrant](https://www.vagrantup.com/) on your platform.
- Centos
- Linux
- macOS
- Arch Linux
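Once Vagrant and VirtualBox are installed, a quick sanity check that both tools are on the PATH (version numbers will vary):
```
vagrant --version
VBoxManage --version
```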
Next: [Compute Resources](02-compute-resources.md)


@@ -18,7 +18,9 @@ Run Vagrant up
This does the following:
- Deploys 5 VMs: 2 masters, 2 workers and 1 loadbalancer, named 'kubernetes-ha-*'
-> This is the default settings. This can be changed at the top of the Vagrant file
+> These are the default settings. They can be changed at the top of the Vagrant file.
+> If you choose to change these settings, please also update vagrant/ubuntu/vagrant/setup-hosts.sh
+> to add the additional hosts to the default /etc/hosts before running "vagrant up".
- Sets IP addresses in the range 192.168.5
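For reference, the host entries that setup-hosts.sh maintains would look roughly like this — a sketch, with names and addresses assumed from the defaults above and the load balancer address used later in this guide:
```
# illustrative /etc/hosts entries (addresses assumed, not authoritative)
192.168.5.11  master-1
192.168.5.12  master-2
192.168.5.21  worker-1
192.168.5.22  worker-2
192.168.5.30  loadbalancer
```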
@@ -73,7 +75,7 @@ Vagrant generates a private key for each of these VMs. It is placed under the .v
## Troubleshooting Tips
-If any of the VMs failed to provision, or is not configured correct, delete the vm using the command:
+1. If any of the VMs fails to provision, or is not configured correctly, delete the VM using the command:
`vagrant destroy <vm>`
@@ -97,6 +99,4 @@ In such cases delete the VM, then delete the VM folder and then re-provision
`rmdir "<path-to-vm-folder>\kubernetes-ha-worker-2"`
`vagrant up`
Next: [Client Tools](03-client-tools.md)


@@ -45,10 +45,9 @@ Results:
```
kube-proxy.kubeconfig
```
Reference docs for kube-proxy [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)
-```
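Before distributing a generated kubeconfig, it can be inspected with standard kubectl — for example:
```
kubectl config view --kubeconfig=kube-proxy.kubeconfig
```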
### The kube-controller-manager Kubernetes Configuration File


@@ -39,6 +39,15 @@ for instance in master-1 master-2; do
scp encryption-config.yaml ${instance}:~/
done
```
Move the `encryption-config.yaml` encryption config file to the appropriate directory.
```
for instance in master-1 master-2; do
ssh ${instance} sudo mv encryption-config.yaml /var/lib/kubernetes/
done
```
Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data
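For reference, the file distributed above follows the shape documented at that link — a minimal sketch (the key shown is a placeholder, not a real secret):
```
master-1$ cat /var/lib/kubernetes/encryption-config.yaml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
```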
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)


@@ -8,7 +8,7 @@ The commands in this lab must be run on each controller instance: `master-1`, an
### Running commands in parallel with tmux
-[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
+[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time.
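If tmux is new to you, the essential steps are small — a sketch using standard tmux commands (arrange the panes however you like):
```
tmux new-session -s etcd            # start a session
# split the window, ssh to master-1 in one pane and master-2 in the other,
# then mirror keystrokes to both panes with:
# Ctrl-b : set-window-option synchronize-panes on
```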
## Bootstrapping an etcd Cluster Member


@@ -78,7 +78,7 @@ Documentation=https://github.com/kubernetes/kubernetes
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
---apiserver-count=3 \\
+--apiserver-count=2 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
@@ -99,7 +99,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
--kubelet-https=true \\
---runtime-config=api/all \\
+--runtime-config=api/all=true \\
--service-account-key-file=/var/lib/kubernetes/service-account.crt \\
--service-cluster-ip-range=10.96.0.0/24 \\
--service-node-port-range=30000-32767 \\
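After editing the unit file, reloading and inspecting the live process confirms the new flag values took effect — a sketch assuming systemd and the service name used in this guide:
```
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver
ps aux | grep '[k]ube-apiserver' | tr ' ' '\n' | grep -E 'apiserver-count|runtime-config'
```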


@@ -14,7 +14,11 @@ This is not a practical approach when you have 1000s of nodes in the cluster, an
- The Nodes can retrieve the signed certificate from the Kubernetes CA
- The Nodes can generate a kube-config file using this certificate by themselves
- The Nodes can start and join the cluster by themselves
-- The Nodes can renew certificates when they expire by themselves
+- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
+In Kubernetes 1.11, a patch was merged to require administrator or Controller approval of node-serving CSRs for security reasons.
+Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation
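In practice, the administrator's side of that approval is two commands — a sketch (CSR names are generated and will differ):
```
master-1$ kubectl get csr
master-1$ kubectl certificate approve <csr-name>
```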
So let's get started!
@@ -39,16 +43,13 @@ So let's get started!
Copy the ca certificate to the worker node:
```
scp ca.crt worker-2:~/
```
## Step 1 Configure the Binaries on the Worker node
### Download and Install Worker Binaries
```
-wget -q --show-progress --https-only --timestamping \
+worker-2$ wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet
@@ -59,7 +60,7 @@ Reference: https://kubernetes.io/docs/setup/release/#node-binaries
Create the installation directories:
```
-sudo mkdir -p \
+worker-2$ sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
@@ -78,7 +79,7 @@ Install the worker binaries:
```
### Move the ca certificate
-`sudo mv ca.crt /var/lib/kubernetes/`
+`worker-2$ sudo mv ca.crt /var/lib/kubernetes/`
## Step 1 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke the Certificate API
@@ -86,10 +87,10 @@ For the workers(kubelet) to access the Certificates API, they need to authentica
Bootstrap Tokens take the form of a 6-character token ID followed by a 16-character token secret, separated by a dot, e.g. abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}\.[a-z0-9]{16}
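A token of the right shape can be generated on the fly — a sketch using standard coreutils (the resulting value is random, not the one used in this lab):
```
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "${TOKEN_ID}.${TOKEN_SECRET}"   # e.g. abcdef.0123456789abcdef
```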
Bootstrap Tokens are created as a secret in the kube-system namespace.
```
-cat > bootstrap-token-07401b.yaml <<EOF
+master-1$ cat > bootstrap-token-07401b.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
@@ -119,7 +120,7 @@ stringData:
EOF
-kubectl create -f bootstrap-token-07401b.yaml
+master-1$ kubectl create -f bootstrap-token-07401b.yaml
```
@@ -136,11 +137,11 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tok
Next, we associate the group we created before with the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet.
```
-kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
+master-1$ kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
--------------- OR ---------------
-cat > csrs-for-bootstrapping.yaml <<EOF
+master-1$ cat > csrs-for-bootstrapping.yaml <<EOF
# enable bootstrapping nodes to create CSR
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@@ -157,18 +158,18 @@ roleRef:
EOF
-kubectl create -f csrs-for-bootstrapping.yaml
+master-1$ kubectl create -f csrs-for-bootstrapping.yaml
```
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr
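Either route should leave the same binding in place; a quick check with standard kubectl:
```
master-1$ kubectl describe clusterrolebinding create-csrs-for-bootstrapping
```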
## Step 3 Authorize workers(kubelets) to approve CSR
```
-kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
+master-1$ kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
--------------- OR ---------------
-cat > auto-approve-csrs-for-group.yaml <<EOF
+master-1$ cat > auto-approve-csrs-for-group.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@@ -185,7 +186,7 @@ roleRef:
EOF
-kubectl create -f auto-approve-csrs-for-group.yaml
+master-1$ kubectl create -f auto-approve-csrs-for-group.yaml
```
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
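You can confirm what the group may now do with `kubectl auth can-i` — a sketch (the `--subresource` and impersonation flags exist in recent kubectl releases; the user name assumes the 07401b token created above):
```
master-1$ kubectl auth can-i create certificatesigningrequests --subresource=nodeclient \
    --as=system:bootstrap:07401b --as-group=system:bootstrappers
```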
@@ -195,11 +196,11 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
We now create the Cluster Role Binding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more, since by the renewal period the node will already be bootstrapped and part of the cluster. All nodes are part of the **system:nodes** group.
```
-kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
+master-1$ kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
--------------- OR ---------------
-cat > auto-approve-renewals-for-nodes.yaml <<EOF
+master-1$ cat > auto-approve-renewals-for-nodes.yaml <<EOF
# Approve renewal CSRs for the group "system:nodes"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@@ -216,7 +217,7 @@ roleRef:
EOF
-kubectl create -f auto-approve-renewals-for-nodes.yaml
+master-1$ kubectl create -f auto-approve-renewals-for-nodes.yaml
```
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
@@ -231,7 +232,7 @@ Here, we don't have the certificates yet. So we cannot create a kubeconfig file.
This is to be done on the `worker-2` node.
```
-sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
+worker-2$ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap
@@ -240,7 +241,7 @@ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-conte
Or
```
-cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
+worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
apiVersion: v1
clusters:
- cluster:
@@ -269,7 +270,7 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
Create the `kubelet-config.yaml` configuration file:
```
-cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
+worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
@@ -296,7 +297,7 @@ EOF
Create the `kubelet.service` systemd unit file:
```
-cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
+worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
@@ -311,7 +312,6 @@ ExecStart=/usr/local/bin/kubelet \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--cert-dir=/var/lib/kubelet/pki/ \\
--rotate-certificates=true \\
---rotate-server-certificates=true \\
--network-plugin=cni \\
--register-node=true \\
--v=2
@@ -327,20 +327,19 @@ Things to note here:
- **bootstrap-kubeconfig**: Location of the bootstrap-kubeconfig file.
- **cert-dir**: The directory where the generated certificates are stored.
- **rotate-certificates**: Rotates client certificates when they expire.
-- **rotate-server-certificates**: Requests for server certificates on bootstrap and rotates them when they expire.
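Once the kubelet has bootstrapped, the rotated client certificate lands under the configured cert-dir — a quick look (path follows the --cert-dir flag in the unit file above):
```
worker-2$ sudo ls -l /var/lib/kubelet/pki/
# kubelet-client-current.pem is a symlink to the newest client certificate
```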
## Step 7 Configure the Kubernetes Proxy
In one of the previous steps we created the kube-proxy.kubeconfig file. Check [here](https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md) if you missed it.
```
-sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
+worker-2$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
```
Create the `kube-proxy-config.yaml` configuration file:
```
-cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
+worker-2$ cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
@@ -353,7 +352,7 @@ EOF
Create the `kube-proxy.service` systemd unit file:
```
-cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
+worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
@@ -371,6 +370,8 @@ EOF
## Step 8 Start the Worker Services
On worker-2:
```
{
sudo systemctl daemon-reload
@@ -383,7 +384,7 @@ EOF
## Step 9 Approve Server CSR
-`kubectl get csr`
+`master-1$ kubectl get csr`
```
NAME AGE REQUESTOR CONDITION
@@ -393,7 +394,9 @@ csr-95bv6 20s system:node:worker-
Approve
-`kubectl certificate approve csr-95bv6`
+`master-1$ kubectl certificate approve csr-95bv6`
Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubectl-approval
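To anticipate when that manual re-approval will be needed, check the expiry of the kubelet's current certificate — a sketch with openssl (path assumes the default cert-dir used in this lab):
```
worker-2$ sudo openssl x509 -noout -enddate \
    -in /var/lib/kubelet/pki/kubelet-client-current.pem
```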


@@ -29,9 +29,9 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
kubectl config use-context kubernetes-the-hard-way
}
```
Reference doc for kubectl config [here](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
-```
## Verification


@@ -32,9 +32,9 @@ EOF
```
Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole
-The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
+The Kubernetes API Server authenticates to the Kubelet as the `system:kube-apiserver` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
-Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
+Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `system:kube-apiserver` user:
```
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
@@ -50,9 +50,9 @@ roleRef:
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
-name: kube-apiserver
+name: system:kube-apiserver
EOF
```
-Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
+Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
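To double-check the role after applying it (standard kubectl, using the admin.kubeconfig from above):
```
master-1$ kubectl describe clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
```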
Next: [DNS Addon](14-dns-addon.md)


@@ -3,9 +3,9 @@
Install Go
```
-wget https://dl.google.com/go/go1.12.1.linux-amd64.tar.gz
+wget https://dl.google.com/go/go1.15.linux-amd64.tar.gz
-sudo tar -C /usr/local -xzf go1.12.1.linux-amd64.tar.gz
+sudo tar -C /usr/local -xzf go1.15.linux-amd64.tar.gz
export GOPATH="/home/vagrant/go"
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
```
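Afterwards, `go version` should report the toolchain just unpacked (go1.15 with the updated URL above):
```
go version
```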


@@ -11,9 +11,18 @@ NODE_NAME="worker-1"; curl -sSL "https://localhost:6443/ap
kubectl -n kube-system create configmap nodes-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
```
-Edit node to use the dynamically created configuration
+Edit `worker-1` node to use the dynamically created configuration
```
-kubectl edit worker-2
+master-1# kubectl edit node worker-1
```
Add the following YAML bit under `spec`:
```
configSource:
configMap:
name: CONFIG_MAP_NAME # replace CONFIG_MAP_NAME with the name of the ConfigMap
namespace: kube-system
kubeletConfigKey: kubelet
```
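Because the ConfigMap was created with `--append-hash`, its actual name carries a hash suffix; look it up before editing the node — a sketch:
```
master-1$ kubectl -n kube-system get configmaps | grep nodes-config
```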
Configure Kubelet Service
@@ -45,3 +54,5 @@ RestartSec=5
WantedBy=multi-user.target
EOF
```
Reference: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
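Once the kubelet restarts with the new config source, the node object reports the assigned and active configuration — a sketch (field names per the dynamic kubelet config feature; output shape varies by version):
```
master-1$ kubectl get node worker-1 -o jsonpath='{.status.config.active}'
```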