Mirror of https://github.com/kelseyhightower/kubernetes-the-hard-way.git, synced 2025-08-08 20:02:42 +03:00

Merge branch 'master' into patch-1
@@ -2,7 +2,7 @@
 ## VM Hardware Requirements
 
-8 GB of RAM (Preferebly 16 GB)
+8 GB of RAM (Preferably 16 GB)
 50 GB Disk space
 
 ## Virtual Box
@@ -26,4 +26,4 @@ Download and Install [Vagrant](https://www.vagrantup.com/) on your platform.
 - Centos
 - Linux
 - macOS
-- Arch Linux
+- Arch Linux
@@ -73,7 +73,7 @@ Vagrant generates a private key for each of these VMs. It is placed under the .v
 ## Troubleshooting Tips
 
-If any of the VMs failed to provision, or is not configured correct, delete the vm using the command:
+1. If any of the VMs failed to provision, or is not configured correctly, delete the VM using the command:
 
 `vagrant destroy <vm>`
@@ -97,3 +97,11 @@ In such cases delete the VM, then delete the VM folder and then re-provision
 `rmdir "<path-to-vm-folder>\kubernetes-ha-worker-2"`
 
 `vagrant up`
+
+2. When you run `sysctl net.bridge.bridge-nf-call-iptables=1`, it may return the error `sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory`. The following resolves the issue:
+
+`modprobe br_netfilter`
+
+`sysctl -p /etc/sysctl.conf`
+
+`net.bridge.bridge-nf-call-iptables=1`
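To make the fix survive a reboot, a minimal sketch (assuming a systemd-based distro and the conventional `/etc/modules-load.d` and `/etc/sysctl.d` locations; the file names below are only illustrative):

```
# Load the module now, and have it loaded again on every boot
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf

# Persist the sysctl setting and reload all sysctl configuration
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-bridge-nf.conf
sudo sysctl --system
```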
@@ -45,10 +45,9 @@ Results:
 ```
 kube-proxy.kubeconfig
 ```
 
 Reference docs for kube-proxy [here](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/)
-```
 
 ### The kube-controller-manager Kubernetes Configuration File
@@ -8,7 +8,7 @@ The commands in this lab must be run on each controller instance: `master-1`, an
 ### Running commands in parallel with tmux
 
-[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
+[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time.
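For example (an illustrative sketch, not a step from the lab itself), after splitting a tmux window into one pane per controller you can mirror keystrokes to all panes:

```
# Send the same keystrokes to every pane in the current window
tmux set-window-option synchronize-panes on
# ...run the shared commands, then turn mirroring back off
tmux set-window-option synchronize-panes off
```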
 
 ## Bootstrapping an etcd Cluster Member
@@ -78,7 +78,7 @@ Documentation=https://github.com/kubernetes/kubernetes
 ExecStart=/usr/local/bin/kube-apiserver \\
 --advertise-address=${INTERNAL_IP} \\
 --allow-privileged=true \\
---apiserver-count=3 \\
+--apiserver-count=2 \\
 --audit-log-maxage=30 \\
 --audit-log-maxbackup=3 \\
 --audit-log-maxsize=100 \\
@@ -99,7 +99,7 @@ ExecStart=/usr/local/bin/kube-apiserver \\
 --kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
 --kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
 --kubelet-https=true \\
---runtime-config=api/all \\
+--runtime-config=api/all=true \\
 --service-account-key-file=/var/lib/kubernetes/service-account.crt \\
 --service-cluster-ip-range=10.96.0.0/24 \\
 --service-node-port-range=30000-32767 \\
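After restarting the API server with the updated flags, a quick sanity check (a suggested verification, not part of the original lab text) is to list the API groups it now serves; with `--runtime-config=api/all=true` all built-in groups should appear:

```
# Lists the group/version pairs enabled on the API server
kubectl api-versions
```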
@@ -14,7 +14,11 @@ This is not a practical approach when you have 1000s of nodes in the cluster, an
 - The Nodes can retrieve the signed certificate from the Kubernetes CA
 - The Nodes can generate a kube-config file using this certificate by themselves
 - The Nodes can start and join the cluster by themselves
 - The Nodes can renew certificates when they expire by themselves
+- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
+
+In Kubernetes 1.11 a patch was merged to require administrator or Controller approval of node serving CSRs for security reasons.
+
+Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#certificate-rotation
 
 So let's get started!
@@ -86,7 +90,7 @@ For the workers(kubelet) to access the Certificates API, they need to authentica
 Bootstrap Tokens take the form of a 6 character token id followed by 16 character token secret separated by a dot. Eg: abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}\.[a-z0-9]{16}
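As an aside, a token conforming to that pattern can be generated with standard tools (an illustrative sketch, not one of the lab's steps; the variable names are arbitrary):

```
# Produce a 6-character token id and a 16-character token secret from /dev/urandom
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```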
 
 Bootstrap Tokens are created as a secret in the kube-system namespace on the master node.
 
 ```
 master-1$ cat > bootstrap-token-07401b.yaml <<EOF
@@ -311,7 +315,6 @@ ExecStart=/usr/local/bin/kubelet \\
 --kubeconfig=/var/lib/kubelet/kubeconfig \\
 --cert-dir=/var/lib/kubelet/pki/ \\
 --rotate-certificates=true \\
---rotate-server-certificates=true \\
 --network-plugin=cni \\
 --register-node=true \\
 --v=2
@@ -327,7 +330,6 @@ Things to note here:
 - **bootstrap-kubeconfig**: Location of the bootstrap-kubeconfig file.
 - **cert-dir**: The directory where the generated certificates are stored.
 - **rotate-certificates**: Rotates client certificates when they expire.
-- **rotate-server-certificates**: Requests for server certificates on bootstrap and rotates them when they expire.
 
 ## Step 7 Configure the Kubernetes Proxy
@@ -397,6 +399,8 @@ Approve
 
 `master-1$ kubectl certificate approve csr-95bv6`
 
+Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
+
+Reference: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubectl-approval
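When that replacement CSR eventually appears, a hedged way to find and approve it on `master-1` (the CSR names will differ from the example above) is:

```
# Review the outstanding CSRs first, then approve the ones still Pending
kubectl get csr
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve
```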
 
 ## Verification
@@ -29,9 +29,9 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
 kubectl config use-context kubernetes-the-hard-way
 }
+```
 
 Reference doc for kubectl config [here](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)
-```
 
 ## Verification
@@ -53,6 +53,6 @@ subjects:
   name: kube-apiserver
 EOF
 ```
-Reference: https://v1-12.docs.kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
+Reference: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding
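To confirm the binding was created, a quick hedged check (the binding name is whatever the manifest above uses, so grep for the `kube-apiserver` user rather than assuming a name):

```
# Show cluster role bindings and pick out the one granting the kube-apiserver user
kubectl get clusterrolebindings -o wide | grep kube-apiserver
```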
 
 Next: [DNS Addon](14-dns-addon.md)
@@ -3,9 +3,9 @@
 Install Go
 
 ```
-wget https://dl.google.com/go/go1.12.1.linux-amd64.tar.gz
+wget https://dl.google.com/go/go1.15.linux-amd64.tar.gz
 
-sudo tar -C /usr/local -xzf go1.12.1.linux-amd64.tar.gz
+sudo tar -C /usr/local -xzf go1.15.linux-amd64.tar.gz
 export GOPATH="/home/vagrant/go"
 export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin
 ```
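A quick way to confirm the toolchain is on the `PATH` (a suggested check, not in the original text):

```
# Should report the version just unpacked into /usr/local/go (go1.15 here)
go version
```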
@@ -11,9 +11,18 @@ NODE_NAME="worker-1"; curl -sSL "https://localhost:6443/ap
 kubectl -n kube-system create configmap nodes-config --from-file=kubelet=kubelet_configz_${NODE_NAME} --append-hash -o yaml
 ```
 
-Edit node to use the dynamically created configuration
+Edit `worker-1` node to use the dynamically created configuration
 ```
-kubectl edit worker-2
+master-1# kubectl edit node worker-1
 ```
 
+Add the following YAML bit under `spec`:
+```
+configSource:
+  configMap:
+    name: CONFIG_MAP_NAME # replace CONFIG_MAP_NAME with the name of the ConfigMap
+    namespace: kube-system
+    kubeletConfigKey: kubelet
+```
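Once the edit is saved, the node can be checked to see whether it picked up the dynamic configuration (a hedged sketch; the `spec.configSource` and `status.config` fields belong to the dynamic kubelet configuration feature referenced below, and output may vary by version):

```
# The config source recorded on the node, and the config the kubelet reports as active
kubectl get node worker-1 -o jsonpath='{.spec.configSource}{"\n"}'
kubectl get node worker-1 -o jsonpath='{.status.config.active}{"\n"}'
```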
 
 Configure Kubelet Service
@@ -45,3 +54,5 @@ RestartSec=5
 WantedBy=multi-user.target
 EOF
 ```
+
+Reference: https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/