mirror of
https://github.com/kelseyhightower/kubernetes-the-hard-way.git
synced 2025-08-08 20:02:42 +03:00
Merge branch 'master' into patch-1
@@ -2,7 +2,7 @@
|
||||
|
||||
## VM Hardware Requirements
|
||||
|
||||
8 GB of RAM (Preferebly 16 GB)
|
||||
8 GB of RAM (Preferably 16 GB)
|
||||
50 GB Disk space
|
||||
|
||||
## Virtual Box
|
||||
@@ -28,14 +28,3 @@ Download and Install [Vagrant](https://www.vagrantup.com/) on your platform.
|
||||
- macOS
|
||||
- Arch Linux
|
||||
|
||||
## Running Commands in Parallel with tmux
|
||||
|
||||
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
|
||||
|
||||
> The use of tmux is optional and not required to complete this tutorial.
|
||||
|
||||

|
||||
|
||||
> Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
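A minimal sketch of a synchronized two-pane setup (the session name is arbitrary):

```
tmux new-session -d -s kthw                           # create a detached session
tmux split-window -h -t kthw                          # split the window into two panes
tmux set-window-option -t kthw synchronize-panes on   # mirror keystrokes to both panes
tmux attach -t kthw                                   # attach; typed commands now run in both panes
```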
|
||||
|
||||
Next: [Installing the Client Tools](02-client-tools.md)
|
||||
|
@@ -18,7 +18,9 @@ Run Vagrant up
|
||||
This does the following:
|
||||
|
||||
- Deploys 5 VMs - 2 Master, 2 Worker and 1 Loadbalancer, named 'kubernetes-ha-*'
|
||||
> This is the default settings. This can be changed at the top of the Vagrant file
|
||||
> This is the default settings. This can be changed at the top of the Vagrant file.
|
||||
> If you choose to change these settings, please also update vagrant/ubuntu/vagrant/setup-hosts.sh
|
||||
> to add the additional hosts to the default /etc/hosts file before running "vagrant up".
|
||||
|
||||
- Sets IP addresses in the range 192.168.5.x (a quick status check is shown below)
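A quick check after provisioning (a sketch; VM names follow the defaults described above):

```
# all five kubernetes-ha-* VMs should report 'running'
vagrant status
```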
|
||||
|
||||
@@ -73,7 +75,7 @@ Vagrant generates a private key for each of these VMs. It is placed under the .v
|
||||
|
||||
## Troubleshooting Tips
|
||||
|
||||
If any of the VMs failed to provision, or is not configured correct, delete the vm using the command:
|
||||
1. If any of the VMs failed to provision, or is not configured correctly, delete the VM using the command:
|
||||
|
||||
`vagrant destroy <vm>`
|
||||
|
||||
@@ -90,10 +92,18 @@ VirtualBox error:
|
||||
VBoxManage.exe: error: Details: code E_FAIL (0x80004005), component SessionMachine, interface IMachine, callee IUnknown
|
||||
VBoxManage.exe: error: Context: "SaveSettings()" at line 3105 of file VBoxManageModifyVM.cpp
|
||||
|
||||
In such cases delete the VM, then delete teh VM folder and then re-provision
|
||||
In such cases delete the VM, then delete the VM folder and then re-provision
|
||||
|
||||
`vagrant destroy <vm>`
|
||||
|
||||
`rmdir "<path-to-vm-folder>\kubernetes-ha-worker-2"`
|
||||
|
||||
`vagrant up`
|
||||
|
||||
2. When you run "sysctl net.bridge.bridge-nf-call-iptables=1", it may return a "sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory" error. The steps below resolve the issue.
|
||||
|
||||
`modprobe br_netfilter`
|
||||
|
||||
`sysctl -p /etc/sysctl.conf`
|
||||
|
||||
`net.bridge.bridge-nf-call-iptables=1`
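Put together, one way to apply the fix and make it persistent (the persistence step is an assumption; adapt it to your distribution):

```
sudo modprobe br_netfilter
echo 'net.bridge.bridge-nf-call-iptables=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf
```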
|
||||
|
@@ -20,6 +20,9 @@ Create a CA certificate, then generate a Certificate Signing Request and use it
|
||||
# Create private key for CA
|
||||
openssl genrsa -out ca.key 2048
|
||||
|
||||
# Comment line starting with RANDFILE in /etc/ssl/openssl.cnf definition to avoid permission issues
|
||||
sudo sed -i '0,/RANDFILE/{s/RANDFILE/\#&/}' /etc/ssl/openssl.cnf
|
||||
|
||||
# Create CSR using the private key
|
||||
openssl req -new -key ca.key -subj "/CN=KUBERNETES-CA" -out ca.csr
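# Not shown in this diff: the CSR is then typically self-signed with its own key to
# produce the CA certificate (the validity period below is an assumption)
openssl x509 -req -in ca.csr -signkey ca.key -out ca.crt -days 1000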
|
||||
|
||||
|
@@ -4,7 +4,7 @@ In this lab you will generate [Kubernetes configuration files](https://kubernete
|
||||
|
||||
## Client Authentication Configs
|
||||
|
||||
In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.
|
||||
In this section you will generate kubeconfig files for the `controller manager`, `kube-proxy`, `scheduler` clients and the `admin` user.
|
||||
|
||||
### Kubernetes Public IP Address
|
||||
|
||||
@@ -45,10 +45,9 @@ Results:
|
||||
|
||||
```
|
||||
kube-proxy.kubeconfig
|
||||
|
||||
```
|
||||
|
||||
Reference docs for kube-proxy [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)
|
||||
```
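For context, the `kube-proxy.kubeconfig` above is generated with commands of roughly this shape (a sketch: the certificate file names and `LOADBALANCER_ADDRESS` come from earlier steps and are assumptions here):

```
{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://${LOADBALANCER_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.crt \
    --client-key=kube-proxy.key \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
```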
|
||||
|
||||
### The kube-controller-manager Kubernetes Configuration File
|
||||
|
||||
@@ -167,7 +166,7 @@ for instance in worker-1 worker-2; do
|
||||
done
|
||||
```
|
||||
|
||||
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
|
||||
Copy the appropriate `admin.kubeconfig`, `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
|
||||
|
||||
```
|
||||
for instance in master-1 master-2; do
|
||||
|
@@ -39,6 +39,15 @@ for instance in master-1 master-2; do
|
||||
scp encryption-config.yaml ${instance}:~/
|
||||
done
|
||||
```
|
||||
|
||||
Move the `encryption-config.yaml` encryption config file to the appropriate directory.
|
||||
|
||||
```
|
||||
for instance in master-1 master-2; do
|
||||
ssh ${instance} sudo mv encryption-config.yaml /var/lib/kubernetes/
|
||||
done
|
||||
```
|
||||
|
||||
Reference: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#encrypting-your-data
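For reference, the `encryption-config.yaml` created earlier in this lab typically looks like the sketch below (the key name and the base64-encoded 32-byte secret are placeholders):

```
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}   # base64-encoded 32-byte random key
      - identity: {}
```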
|
||||
|
||||
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
|
||||
|
@@ -8,7 +8,7 @@ The commands in this lab must be run on each controller instance: `master-1`, an
|
||||
|
||||
### Running commands in parallel with tmux
|
||||
|
||||
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
|
||||
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time.
|
||||
|
||||
## Bootstrapping an etcd Cluster Member
|
||||
|
||||
|
@@ -78,7 +78,7 @@ Documentation=https://github.com/kubernetes/kubernetes
|
||||
ExecStart=/usr/local/bin/kube-apiserver \\
|
||||
--advertise-address=${INTERNAL_IP} \\
|
||||
--allow-privileged=true \\
|
||||
--apiserver-count=3 \\
|
||||
--apiserver-count=2 \\
|
||||
--audit-log-maxage=30 \\
|
||||
--audit-log-maxbackup=3 \\
|
||||
--audit-log-maxsize=100 \\
|
||||
@@ -99,7 +99,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
|
||||
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
|
||||
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
|
||||
--kubelet-https=true \\
|
||||
--runtime-config=api/all \\
|
||||
--runtime-config=api/all=true \\
|
||||
--service-account-key-file=/var/lib/kubernetes/service-account.crt \\
|
||||
--service-cluster-ip-range=10.96.0.0/24 \\
|
||||
--service-node-port-range=30000-32767 \\
|
||||
@@ -116,10 +116,10 @@ EOF
|
||||
|
||||
### Configure the Kubernetes Controller Manager
|
||||
|
||||
Move the `kube-controller-manager` kubeconfig into place:
|
||||
Copy the `kube-controller-manager` kubeconfig into place:
|
||||
|
||||
```
|
||||
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
|
||||
sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
|
||||
```
|
||||
|
||||
Create the `kube-controller-manager.service` systemd unit file:
|
||||
@@ -154,10 +154,10 @@ EOF
|
||||
|
||||
### Configure the Kubernetes Scheduler
|
||||
|
||||
Move the `kube-scheduler` kubeconfig into place:
|
||||
Copy the `kube-scheduler` kubeconfig into place:
|
||||
|
||||
```
|
||||
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
|
||||
sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
|
||||
```
|
||||
|
||||
Create the `kube-scheduler.service` systemd unit file:
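A minimal sketch of such a unit file (flag values are assumptions; use the values from your own setup):

```
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\
  --address=127.0.0.1 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```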
|
||||
@@ -218,6 +218,8 @@ In this section you will provision an external load balancer to front the Kubern
|
||||
|
||||
### Provision a Network Load Balancer
|
||||
|
||||
Login to `loadbalancer` instance using SSH Terminal.
|
||||
|
||||
```
|
||||
#Install HAProxy
|
||||
loadbalancer# sudo apt-get update && sudo apt-get install -y haproxy
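# A sketch of the corresponding haproxy.cfg (not necessarily the guide's exact config;
# the IPs assume the 192.168.5.x range used elsewhere in this setup)
loadbalancer# cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
frontend kubernetes
    bind 192.168.5.30:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 192.168.5.11:6443 check fall 3 rise 2
    server master-2 192.168.5.12:6443 check fall 3 rise 2
EOF
loadbalancer# sudo systemctl restart haproxy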
|
||||
|
@@ -8,7 +8,8 @@ We will now install the kubernetes components
|
||||
|
||||
## Prerequisites
|
||||
|
||||
The commands in this lab must be run on first worker instance: `worker-1`. Login to first worker instance using SSH Terminal.
|
||||
The Certificates and Configuration are created on `master-1` node and then copied over to workers using `scp`.
|
||||
Once this is done, the commands are to be run on the first worker instance: `worker-1`. Login to the first worker instance using SSH Terminal.
|
||||
|
||||
### Provisioning Kubelet Client Certificates
|
||||
|
||||
@@ -16,7 +17,7 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
|
||||
|
||||
Generate a certificate and private key for one worker node:
|
||||
|
||||
Worker1:
|
||||
On master-1:
|
||||
|
||||
```
|
||||
master-1$ cat > openssl-worker-1.cnf <<EOF
|
||||
@@ -54,8 +55,9 @@ Get the kub-api server load-balancer IP.
|
||||
LOADBALANCER_ADDRESS=192.168.5.30
|
||||
```
|
||||
|
||||
Generate a kubeconfig file for the first worker node:
|
||||
Generate a kubeconfig file for the first worker node.
|
||||
|
||||
On master-1:
|
||||
```
|
||||
{
|
||||
kubectl config set-cluster kubernetes-the-hard-way \
|
||||
@@ -86,7 +88,7 @@ worker-1.kubeconfig
|
||||
```
|
||||
|
||||
### Copy certificates, private keys and kubeconfig files to the worker node:
|
||||
|
||||
On master-1:
|
||||
```
|
||||
master-1$ scp ca.crt worker-1.crt worker-1.key worker-1.kubeconfig worker-1:~/
|
||||
```
|
||||
@@ -95,6 +97,7 @@ master-1$ scp ca.crt worker-1.crt worker-1.key worker-1.kubeconfig worker-1:~/
|
||||
|
||||
Going forward all activities are to be done on the `worker-1` node.
|
||||
|
||||
On worker-1:
|
||||
```
|
||||
worker-1$ wget -q --show-progress --https-only --timestamping \
|
||||
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
|
||||
@@ -126,7 +129,7 @@ Install the worker binaries:
|
||||
```
|
||||
|
||||
### Configure the Kubelet
|
||||
|
||||
On worker-1:
|
||||
```
|
||||
{
|
||||
sudo mv ${HOSTNAME}.key ${HOSTNAME}.crt /var/lib/kubelet/
|
||||
@@ -189,7 +192,7 @@ EOF
|
||||
```
|
||||
|
||||
### Configure the Kubernetes Proxy
|
||||
|
||||
On worker-1:
|
||||
```
|
||||
worker-1$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
|
||||
```
|
||||
@@ -227,7 +230,7 @@ EOF
|
||||
```
|
||||
|
||||
### Start the Worker Services
|
||||
|
||||
On worker-1:
|
||||
```
|
||||
{
|
||||
sudo systemctl daemon-reload
|
||||
@@ -239,7 +242,7 @@ EOF
|
||||
> Remember to run the above commands on worker node: `worker-1`
|
||||
|
||||
## Verification
|
||||
|
||||
On master-1:
|
||||
|
||||
List the registered Kubernetes nodes from the master node:
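For example (assuming `admin.kubeconfig` is still in the home directory on `master-1`):

```
master-1$ kubectl get nodes --kubeconfig admin.kubeconfig
```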
|
||||
|
||||
@@ -257,4 +260,6 @@ worker-1 NotReady <none> 93s v1.13.0
|
||||
> Note: It is OK for the worker node to be in a NotReady state.
|
||||
That is because we haven't configured Networking yet.
|
||||
|
||||
Optional: At this point you may run the certificate verification script to make sure all certificates are configured correctly. Follow the instructions [here](verify-certificates.md)
|
||||
|
||||
Next: [TLS Bootstrapping Kubernetes Workers](10-tls-bootstrapping-kubernetes-workers.md)
|
||||
|
@@ -14,7 +14,11 @@ This is not a practical approach when you have 1000s of nodes in the cluster, an
|
||||
- The Nodes can retrieve the signed certificate from the Kubernetes CA
|
||||
- The Nodes can generate a kube-config file using this certificate by themselves
|
||||
- The Nodes can start and join the cluster by themselves
|
||||
- The Nodes can renew certificates when they expire by themselves
|
||||
- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
|
||||
|
||||
In Kubernetes 1.11 a patch was merged to require administrator or Controller approval of node serving CSRs for security reasons.
|
||||
|
||||
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation
|
||||
|
||||
So let's get started!
|
||||
|
||||
@@ -39,16 +43,13 @@ So let's get started!
|
||||
|
||||
Copy the ca certificate to the worker node:
|
||||
|
||||
```
|
||||
scp ca.crt worker-2:~/
|
||||
```
|
||||
|
||||
## Step 1 Configure the Binaries on the Worker node
|
||||
|
||||
### Download and Install Worker Binaries
|
||||
|
||||
```
|
||||
wget -q --show-progress --https-only --timestamping \
|
||||
worker-2$ wget -q --show-progress --https-only --timestamping \
|
||||
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
|
||||
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
|
||||
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet
|
||||
@@ -59,7 +60,7 @@ Reference: https://kubernetes.io/docs/setup/release/#node-binaries
|
||||
Create the installation directories:
|
||||
|
||||
```
|
||||
sudo mkdir -p \
|
||||
worker-2$ sudo mkdir -p \
|
||||
/etc/cni/net.d \
|
||||
/opt/cni/bin \
|
||||
/var/lib/kubelet \
|
||||
@@ -78,7 +79,7 @@ Install the worker binaries:
|
||||
```
|
||||
### Move the ca certificate
|
||||
|
||||
`sudo mv ca.crt /var/lib/kubernetes/`
|
||||
`worker-2$ sudo mv ca.crt /var/lib/kubernetes/`
|
||||
|
||||
# Step 1 Create the Bootstrap Token to be used by Nodes (Kubelets) to invoke the Certificates API
|
||||
|
||||
@@ -86,10 +87,10 @@ For the workers(kubelet) to access the Certificates API, they need to authentica
|
||||
|
||||
Bootstrap Tokens take the form of a 6-character token ID followed by a 16-character token secret, separated by a dot, e.g. abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}\.[a-z0-9]{16}
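A hypothetical helper for generating a token in that format (the lab itself uses a fixed example token, so this is optional):

```
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)       # 6-character token id
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)  # 16-character token secret
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```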
|
||||
|
||||
Bootstrap Tokens are created as a secret in the kube-system namespace.
|
||||
|
||||
|
||||
```
|
||||
cat > bootstrap-token-07401b.yaml <<EOF
|
||||
master-1$ cat > bootstrap-token-07401b.yaml <<EOF
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
@@ -119,7 +120,7 @@ stringData:
|
||||
EOF
|
||||
|
||||
|
||||
kubectl create -f bootstrap-token-07401b.yaml
|
||||
master-1$ kubectl create -f bootstrap-token-07401b.yaml
|
||||
|
||||
```
|
||||
|
||||
@@ -136,11 +137,11 @@ Reference: https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tok
|
||||
Next we associate the group we created before with the system:node-bootstrapper ClusterRole. This ClusterRole gives the group enough permissions to bootstrap the kubelet.
|
||||
|
||||
```
|
||||
kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
|
||||
master-1$ kubectl create clusterrolebinding create-csrs-for-bootstrapping --clusterrole=system:node-bootstrapper --group=system:bootstrappers
|
||||
|
||||
--------------- OR ---------------
|
||||
|
||||
cat > csrs-for-bootstrapping.yaml <<EOF
|
||||
master-1$ cat > csrs-for-bootstrapping.yaml <<EOF
|
||||
# enable bootstrapping nodes to create CSR
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
@@ -157,18 +158,18 @@ roleRef:
|
||||
EOF
|
||||
|
||||
|
||||
kubectl create -f csrs-for-bootstrapping.yaml
|
||||
master-1$ kubectl create -f csrs-for-bootstrapping.yaml
|
||||
|
||||
```
|
||||
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#authorize-kubelet-to-create-csr
|
||||
|
||||
## Step 3 Authorize workers(kubelets) to approve CSR
|
||||
```
|
||||
kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
|
||||
master-1$ kubectl create clusterrolebinding auto-approve-csrs-for-group --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --group=system:bootstrappers
|
||||
|
||||
--------------- OR ---------------
|
||||
|
||||
cat > auto-approve-csrs-for-group.yaml <<EOF
|
||||
master-1$ cat > auto-approve-csrs-for-group.yaml <<EOF
|
||||
# Approve all CSRs for the group "system:bootstrappers"
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
@@ -185,7 +186,7 @@ roleRef:
|
||||
EOF
|
||||
|
||||
|
||||
kubectl create -f auto-approve-csrs-for-group.yaml
|
||||
master-1$ kubectl create -f auto-approve-csrs-for-group.yaml
|
||||
```
|
||||
|
||||
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
|
||||
@@ -195,11 +196,11 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
|
||||
We now create the ClusterRoleBinding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the **system:bootstrappers** group here any more, since by the time renewal is due the node should already be bootstrapped and part of the cluster. All nodes are part of the **system:nodes** group.
|
||||
|
||||
```
|
||||
kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
|
||||
master-1$ kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
|
||||
|
||||
--------------- OR ---------------
|
||||
|
||||
cat > auto-approve-renewals-for-nodes.yaml <<EOF
|
||||
master-1$ cat > auto-approve-renewals-for-nodes.yaml <<EOF
|
||||
# Approve renewal CSRs for the group "system:nodes"
|
||||
kind: ClusterRoleBinding
|
||||
apiVersion: rbac.authorization.k8s.io/v1
|
||||
@@ -216,7 +217,7 @@ roleRef:
|
||||
EOF
|
||||
|
||||
|
||||
kubectl create -f auto-approve-renewals-for-nodes.yaml
|
||||
master-1$ kubectl create -f auto-approve-renewals-for-nodes.yaml
|
||||
```
|
||||
|
||||
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#approval
|
||||
@@ -231,7 +232,7 @@ Here, we don't have the certificates yet. So we cannot create a kubeconfig file.
|
||||
This is to be done on the `worker-2` node.
|
||||
|
||||
```
|
||||
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
|
||||
worker-2$ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-cluster bootstrap --server='https://192.168.5.30:6443' --certificate-authority=/var/lib/kubernetes/ca.crt
|
||||
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d
|
||||
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap
|
||||
sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-context bootstrap
|
||||
@@ -240,7 +241,7 @@ sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig use-conte
|
||||
Or
|
||||
|
||||
```
|
||||
cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
|
||||
worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
|
||||
apiVersion: v1
|
||||
clusters:
|
||||
- cluster:
|
||||
@@ -269,7 +270,7 @@ Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kub
|
||||
Create the `kubelet-config.yaml` configuration file:
|
||||
|
||||
```
|
||||
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
|
||||
worker-2$ cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
|
||||
kind: KubeletConfiguration
|
||||
apiVersion: kubelet.config.k8s.io/v1beta1
|
||||
authentication:
|
||||
@@ -296,7 +297,7 @@ EOF
|
||||
Create the `kubelet.service` systemd unit file:
|
||||
|
||||
```
|
||||
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
||||
worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
|
||||
[Unit]
|
||||
Description=Kubernetes Kubelet
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
@@ -311,7 +312,6 @@ ExecStart=/usr/local/bin/kubelet \\
|
||||
--kubeconfig=/var/lib/kubelet/kubeconfig \\
|
||||
--cert-dir=/var/lib/kubelet/pki/ \\
|
||||
--rotate-certificates=true \\
|
||||
--rotate-server-certificates=true \\
|
||||
--network-plugin=cni \\
|
||||
--register-node=true \\
|
||||
--v=2
|
||||
@@ -327,18 +327,19 @@ Things to note here:
|
||||
- **bootstrap-kubeconfig**: Location of the bootstrap-kubeconfig file.
|
||||
- **cert-dir**: The directory where the generated certificates are stored.
|
||||
- **rotate-certificates**: Rotates client certificates when they expire.
|
||||
- **rotate-server-certificates**: Requests for server certificates on bootstrap and rotates them when they expire.
|
||||
|
||||
## Step 7 Configure the Kubernetes Proxy
|
||||
|
||||
In one of the previous steps we created the kube-proxy.kubeconfig file. Check [here](https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/05-kubernetes-configuration-files.md) if you missed it.
|
||||
|
||||
```
|
||||
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
|
||||
worker-2$ sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
|
||||
```
|
||||
|
||||
Create the `kube-proxy-config.yaml` configuration file:
|
||||
|
||||
```
|
||||
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
|
||||
worker-2$ cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
|
||||
kind: KubeProxyConfiguration
|
||||
apiVersion: kubeproxy.config.k8s.io/v1alpha1
|
||||
clientConnection:
|
||||
@@ -351,7 +352,7 @@ EOF
|
||||
Create the `kube-proxy.service` systemd unit file:
|
||||
|
||||
```
|
||||
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
|
||||
worker-2$ cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
|
||||
[Unit]
|
||||
Description=Kubernetes Kube Proxy
|
||||
Documentation=https://github.com/kubernetes/kubernetes
|
||||
@@ -369,6 +370,8 @@ EOF
|
||||
|
||||
## Step 8 Start the Worker Services
|
||||
|
||||
On worker-2:
|
||||
|
||||
```
|
||||
{
|
||||
sudo systemctl daemon-reload
|
||||
@@ -381,7 +384,7 @@ EOF
|
||||
|
||||
## Step 9 Approve Server CSR
|
||||
|
||||
`kubectl get csr`
|
||||
`master-1$ kubectl get csr`
|
||||
|
||||
```
|
||||
NAME AGE REQUESTOR CONDITION
|
||||
@@ -391,7 +394,9 @@ csr-95bv6 20s system:node:worker-
|
||||
|
||||
Approve
|
||||
|
||||
`kubectl certificate approve csr-95bv6`
|
||||
`master-1$ kubectl certificate approve csr-95bv6`
|
||||
|
||||
Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
|
||||
|
||||
Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubectl-approval
|
||||
|
||||
|
@@ -29,9 +29,9 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
|
||||
|
||||
kubectl config use-context kubernetes-the-hard-way
|
||||
}
|
||||
```
|
||||
|
||||
Reference doc for kubectl config [here](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
|
||||
```
|
||||
|
||||
## Verification
|
||||
|
||||
|
@@ -32,9 +32,9 @@ EOF
|
||||
```
|
||||
Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole
|
||||
|
||||
The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
|
||||
The Kubernetes API Server authenticates to the Kubelet as the `system:kube-apiserver` user using the client certificate as defined by the `--kubelet-client-certificate` flag.
|
||||
|
||||
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
|
||||
Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `system:kube-apiserver` user:
|
||||
|
||||
```
|
||||
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
|
||||
@@ -50,9 +50,9 @@ roleRef:
|
||||
subjects:
|
||||
- apiGroup: rbac.authorization.k8s.io
|
||||
kind: User
|
||||
name: kube-apiserver
|
||||
name: system:kube-apiserver
|
||||
EOF
|
||||
```
|
||||
Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
|
||||
Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
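A quick, optional sanity check that the binding exists (standard kubectl; the grep pattern is just a convenience):

```
kubectl get clusterrolebindings --kubeconfig admin.kubeconfig | grep apiserver
```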
|
||||
|
||||
Next: [DNS Addon](14-dns-addon.md)
|
||||
|
@@ -3,9 +3,9 @@
|
||||
Install Go
|
||||
|
||||
```
|
||||
wget https://dl.google.com/go/go1.12.1.linux-amd64.tar.gz
|
||||
wget https://dl.google.com/go/go1.15.linux-amd64.tar.gz
|
||||
|
||||
sudo tar -C /usr/local -xzf go1.12.1.linux-amd64.tar.gz
|
||||
sudo tar -C /usr/local -xzf go1.15.linux-amd64.tar.gz
|
||||
export GOPATH="/home/vagrant/go"
|
||||
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
|
||||
```
|
||||
@@ -13,23 +13,23 @@ export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
|
||||
## Install kubetest
|
||||
|
||||
```
|
||||
go get -v -u k8s.io/test-infra/kubetest
|
||||
git clone https://github.com/kubernetes/test-infra.git
|
||||
cd test-infra/
|
||||
GO111MODULE=on go install ./kubetest
|
||||
```
|
||||
|
||||
> Note: This may take a few minutes depending on your network speed
|
||||
|
||||
## Extract the Version
|
||||
## Use the version specific to your cluster
|
||||
|
||||
```
|
||||
kubetest --extract=v1.13.0
|
||||
K8S_VERSION=$(kubectl version -o json | jq -r '.serverVersion.gitVersion')
|
||||
export KUBERNETES_CONFORMANCE_TEST=y
|
||||
export KUBECONFIG="$HOME/.kube/config"
|
||||
|
||||
cd kubernetes
|
||||
|
||||
export KUBE_MASTER_IP="192.168.5.11:6443"
|
||||
|
||||
export KUBE_MASTER=master-1
|
||||
|
||||
kubetest --test --provider=skeleton --test_args="--ginkgo.focus=\[Conformance\]" | tee test.out
|
||||
kubetest --provider=skeleton --test --test_args="--ginkgo.focus=\[Conformance\]" --extract ${K8S_VERSION} | tee test.out
|
||||
|
||||
```
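The version-detection line above relies on `jq`; if it is not present on the VM (an assumption about the image), install it first:

```
sudo apt-get update && sudo apt-get install -y jq
```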
|
||||
|
||||
|
@@ -11,9 +11,18 @@ NODE_NAME="worker-1"; NODE_NAME="worker-1"; curl -sSL "https://localhost:6443/ap
|
||||
kubectl -n kube-system create configmap nodes-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
|
||||
```
|
||||
|
||||
Edit node to use the dynamically created configuration
|
||||
Edit the `worker-1` node to use the dynamically created configuration:
|
||||
```
|
||||
kubectl edit worker-2
|
||||
master-1# kubectl edit node worker-1
|
||||
```
|
||||
|
||||
Add the following YAML snippet under `spec`:
|
||||
```
|
||||
configSource:
|
||||
configMap:
|
||||
name: CONFIG_MAP_NAME # replace CONFIG_MAP_NAME with the name of the ConfigMap
|
||||
namespace: kube-system
|
||||
kubeletConfigKey: kubelet
|
||||
```
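An optional check that the edit was picked up (standard kubectl; output format may vary by version):

```
master-1# kubectl get node worker-1 -o jsonpath='{.spec.configSource}'
```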
|
||||
|
||||
Configure Kubelet Service
|
||||
@@ -45,3 +54,5 @@ RestartSec=5
|
||||
WantedBy=multi-user.target
|
||||
EOF
|
||||
```
|
||||
|
||||
Reference: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
|
||||
|
BIN  docs/images/master-1-cert.png (new file; binary file not shown; 100 KiB)
BIN  docs/images/master-2-cert.png (new file; binary file not shown; 75 KiB)
BIN  docs/images/worker-1-cert.png (new file; binary file not shown; 44 KiB)
29   docs/verify-certificates.md (new file)
@@ -0,0 +1,29 @@
|
||||
# Verify Certificates in Master-1/2 & Worker-1
|
||||
|
||||
> Note: This script is only intended to work with a Kubernetes cluster set up following the instructions in this repository. It is not a generic script that works for all Kubernetes clusters. Feel free to send in PRs with improvements.
|
||||
|
||||
This script was developed to assist the verification of certificates for each Kubernetes component as part of building the cluster. It may be executed as soon as you have completed the lab steps up to [Bootstrapping the Kubernetes Worker Nodes](./09-bootstrapping-kubernetes-workers.md). The script is named `cert_verify.sh` and is available in the `/home/vagrant` directory of the master-1, master-2 and worker-1 nodes. If it is not already available there, copy the script to the nodes from [here](../vagrant/ubuntu/cert_verify.sh).
|
||||
|
||||
The script must be executed with the following commands after logging into the respective virtual machine (master-1, master-2 or worker-1) via SSH.
|
||||
|
||||
```bash
|
||||
cd /home/vagrant
|
||||
bash cert_verify.sh
|
||||
```
|
||||
|
||||
The following is the expected output of a successful script run on each node:
|
||||
|
||||
1. VM: Master-1
|
||||
|
||||

|
||||
|
||||
2. VM: Master-2
|
||||
|
||||

|
||||
|
||||
3. VM: Worker-1
|
||||
|
||||

|
||||
|
||||
Any misconfiguration in certificates will be reported in red.
|
||||
|