chg: Make Commands Shorter

In an effort to leave less to remember, the commands have been shortened
where possible. In some cases they have also been moved around and
reorganized so they are easier to remember.
pull/882/head
Khalifah Shabazz 2025-06-11 18:06:04 -04:00
parent b4136eb578
commit e5f54442c2
No known key found for this signature in database
GPG Key ID: 762A588BFB5A40ED
6 changed files with 95 additions and 102 deletions

View File

@@ -143,7 +143,7 @@ Make the binaries executable.
```bash
{
chmod +x downloads/{client,cni-plugins,controller,worker}/*
chmod +x -R downloads
}
```
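Since the recursive form marks every file under `downloads/` executable, a quick sanity check can confirm nothing was missed; a minimal sketch, assuming the `downloads/` layout from the earlier labs:

```shell
# Prints nothing when every file under downloads/ has the
# owner-execute bit set (i.e. the chmod above took effect)
find downloads -type f ! -perm -u+x
```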

View File

@@ -36,6 +36,7 @@ Create the Kubernetes configuration directory:
```bash
sudo mkdir -p /etc/kubernetes/config
sudo mkdir -p /var/lib/kubernetes
```
### Install the Kubernetes Controller Binaries
@@ -55,8 +56,6 @@ Install the Kubernetes binaries:
```bash
{
sudo mkdir -p /var/lib/kubernetes/
sudo mv ca.crt ca.key \
kube-apiserver.key kube-apiserver.crt \
service-accounts.key service-accounts.crt \
@@ -65,48 +64,35 @@ Install the Kubernetes binaries:
}
```
Create the `kube-apiserver.service` systemd unit file:
Install the systemd service unit files for `kube-apiserver.service`,
`kube-controller-manager.service`, and `kube-scheduler.service`:
```bash
sudo mv kube-apiserver.service \
/etc/systemd/system/kube-apiserver.service
sudo mv kube-*.service /etc/systemd/system
```
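Because the wildcard now sweeps up every matching unit file at once, it can be worth previewing what will be moved; a minimal check, assuming the unit files were downloaded into the current directory:

```shell
# List the unit files the glob will match before moving them:
# kube-apiserver, kube-controller-manager and kube-scheduler
ls kube-*.service
```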
### Configure the Kubernetes Controller Manager
### Configure the Kubernetes Cluster Components
Move the `kube-controller-manager` kubeconfig into place:
Install the `kube-controller-manager` and `kube-scheduler` kubeconfigs:
```bash
sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
sudo mv kube-*.kubeconfig /var/lib/kubernetes/
```
Create the `kube-controller-manager.service` systemd unit file:
```bash
sudo mv kube-controller-manager.service /etc/systemd/system/
```
### Configure the Kubernetes Scheduler
Move the `kube-scheduler` kubeconfig into place:
This will set up the static pod scheduler.
```bash
sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
```
Create the `kube-scheduler.yaml` configuration file:
Install the `kube-scheduler.yaml` configuration file:
```bash
sudo mv kube-scheduler.yaml /etc/kubernetes/config/
```
Create the `kube-scheduler.service` systemd unit file:
### Start the Control Plane Components
```bash
sudo mv kube-scheduler.service /etc/systemd/system/
```
### Start the Controller Services
These components have been installed as standalone services managed by systemd.
```bash
{

View File

@@ -12,36 +12,50 @@ Copy the Kubernetes binaries and systemd unit files to each worker instance:
```bash
for HOST in node01 node02; do
# Grab the subnet CIDR block from the machines database, if you want to use
SUBNET=$(grep ${HOST} machines.txt | cut -d " " -f 4)
sed "s|SUBNET|$SUBNET|g" \
configs/10-bridge.conf > 10-bridge.conf
scp 10-bridge.conf configs/kubelet-config.yaml \
vagrant@${HOST}:~/
done
```
# For each machine set its subnet in the CNI config file
sed "s|SUBNET|${SUBNET}|g" \
configs/11-crio-ipv4-bridge.conflist > 11-crio-ipv4-bridge.conflist
```bash
for HOST in node01 node02; do
# Copy the CNI network plugin config over
scp 11-crio-ipv4-bridge.conflist vagrant@${HOST}:~/
# Copy binaries over
scp \
downloads/worker/* \
downloads/client/kubectl \
configs/99-loopback.conf \
configs/containerd-config.toml \
configs/kube-proxy-config.yaml \
configs/kubelet-config.yaml \
units/containerd.service \
units/kubelet.service \
units/kube-proxy.service \
downloads/cni-plugins/ \
11-crio-ipv4-bridge.conflist \
vagrant@${HOST}:~/
# Copy CNI plugins directory over
scp -r \
downloads/cni-plugins/ \
vagrant@${HOST}:~/
done
```
Create the installation directories:
```bash
for HOST in node01 node02; do
scp -r \
downloads/cni-plugins/ \
vagrant@${HOST}:~/cni-plugins/
for HOST in node01 node02; do
ssh vagrant@${HOST} sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes \
/etc/containerd
done
```
@@ -87,18 +101,6 @@ swapoff -a
> To ensure swap remains off after reboot consult your Linux distro
> documentation.
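One common way to make the change stick across reboots (an assumption here: swap is configured through `/etc/fstab`, as on default Ubuntu installs) is to comment out the swap entries:

```shell
# Comment out any swap lines in /etc/fstab so swap is not
# re-enabled on the next boot (GNU sed, in-place edit)
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```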
Create the installation directories:
```bash
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
```
Install the worker binaries:
```bash
@@ -115,7 +117,7 @@ Install the worker binaries:
Create the `bridge` network configuration file:
```bash
sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
sudo mv 11-crio-ipv4-bridge.conflist 99-loopback.conf /etc/cni/net.d/
```
To ensure network traffic crossing the CNI `bridge` network is processed by
@@ -128,43 +130,30 @@ To ensure network traffic crossing the CNI `bridge` network is processed by
}
```
Enable this for both IPv4 and IPv6 (a.k.a. dual-stack), then load the
sysctl settings from the specified file with `sysctl -p`.
```bash
{
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee -a /etc/sysctl.d/kubernetes.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" | sudo tee -a /etc/sysctl.d/kubernetes.conf
# Load in sysctl settings from the file specified
sudo sysctl -p /etc/sysctl.d/kubernetes.conf
}
```
### Configure containerd
### Configure containerd, Kubelet, and the Kubernetes Proxy
Install the `containerd` configuration files:
Install the configuration files:
```bash
{
sudo mkdir -p /etc/containerd/
sudo mv containerd-config.toml /etc/containerd/config.toml
sudo mv containerd.service /etc/systemd/system/
}
```
### Configure the Kubelet
Create the `kubelet-config.yaml` configuration file:
```bash
{
sudo mv kubelet-config.yaml /var/lib/kubelet/
sudo mv kubelet.service /etc/systemd/system/
}
```
### Configure the Kubernetes Proxy
```bash
{
sudo mv kube-proxy-config.yaml /var/lib/kube-proxy/
sudo mv kube-proxy.service /etc/systemd/system/
sudo mv containerd.service kubelet.service kube-proxy.service \
/etc/systemd/system/
}
```
@@ -223,6 +212,10 @@ node01 Ready <none> 2m5s v1.33.1
node02 Ready <none> 2m12s v1.33.1
```
NOTE: For extra credit, see if you can also turn the controlplane into a
worker node that can host Pods. Hint: you will need to give it a subnet such
as `10.200.2.0/24` in `machines.txt`.
Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
---

View File

@@ -8,6 +8,8 @@ Quick access to information to help you when you run into trouble.
* [Running Kubelet in Standalone Mode]
* [Using RBAC Authorization]
* [Using Node Authorization]
* [Install network plugins]
* [CRI-O CNI configuration], which covers installing either `10-crio-bridge.conflist` or `11-crio-ipv4-bridge.conflist`.
---
@@ -17,3 +19,5 @@ Quick access to information to help you when you run into trouble.
[Generate Certificates Manually with OpenSSL]: https://v1-32.docs.kubernetes.io/docs/tasks/administer-cluster/certificates/#openssl
[Using RBAC Authorization]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
[Using Node Authorization]: https://kubernetes.io/docs/reference/access-authn-authz/node/
[Install network plugins]: https://v1-32.docs.kubernetes.io/docs/tutorials/cluster-management/kubelet-standalone/#install-network-plugins
[CRI-O CNI configuration]: https://github.com/cri-o/cri-o/blob/main/contrib/cni/README.md

View File

@@ -12,6 +12,9 @@
INSTALL_MODE = "MANUAL"
BOX_IMG = "ubuntu/jammy64"
#BOX_IMG = "ubuntu/lts-noble-64"
#BASE_MAC = "080027AA5560"
BOOT_TIMEOUT_SEC = 120
# Set the build mode
@@ -106,7 +109,7 @@ def setup_dns(node)
if INSTALL_MODE == "KUBEADM"
# Set up /etc/hosts
node.vm.provision "setup-hosts", :type => "shell", :path => "ubuntu/vagrant/setup-hosts.sh" do |s|
s.args = [NAT_IP_PREFIX, BUILD_MODE, NUM_WORKER_NODES, CONTROLPLANE_NAT_IP, NODE_IP_START]
s.args = [NAT_IP_PREFIX, BUILD_MODE, NUM_WORKER_NODES, CONTROLPLANE_NAT_IP, NODE_IP_START, INSTALL_MODE]
end
end
# Set up DNS resolution
@@ -142,6 +145,29 @@ Vagrant.configure("2") do |config|
# `vagrant box outdated`. This is not recommended.
config.vm.box_check_update = false
#config.vm.base_mac = BASE_MAC
# Provision a jumpbox
if INSTALL_MODE == "MANUAL"
# Provision a JumpBox
config.vm.define JUMPER_NAME do |node|
# Name shown in the GUI
node.vm.provider "virtualbox" do |vb|
vb.name = JUMPER_NAME
vb.memory = 512
vb.cpus = 1
end
node.vm.hostname = JUMPER_NAME
if BUILD_MODE == "BRIDGE"
adapter = ""
node.vm.network :public_network, bridge: get_bridge_adapter()
else
node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{JUMPER_NAT_START_IP}"
end
provision_kubernetes_node node
end
end
# Provision controlplane Nodes
config.vm.define CONTROLPLANE_NAME do |node|
# Name shown in the GUI
@@ -181,26 +207,6 @@ Vagrant.configure("2") do |config|
end
end
if INSTALL_MODE == "MANUAL"
# Provision a JumpBox
config.vm.define JUMPER_NAME do |node|
# Name shown in the GUI
node.vm.provider "virtualbox" do |vb|
vb.name = JUMPER_NAME
vb.memory = 512
vb.cpus = 1
end
node.vm.hostname = JUMPER_NAME
if BUILD_MODE == "BRIDGE"
adapter = ""
node.vm.network :public_network, bridge: get_bridge_adapter()
else
node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{JUMPER_NAT_START_IP}"
end
provision_kubernetes_node node
end
end
if BUILD_MODE == "BRIDGE"
# Trigger that fires after each VM starts.
# Does nothing until all the VMs have started, at which point it

View File

@@ -9,6 +9,7 @@ BUILD_MODE=$2
NUM_WORKER_NODES=$3
MASTER_IP_START=$4
NODE_IP_START=$5
INSTALL_MODE=$6
if [ "$BUILD_MODE" = "BRIDGE" ]
then
@@ -35,6 +36,7 @@ fi
# Remove unwanted entries
sed -e '/^.*ubuntu-jammy.*/d' -i /etc/hosts
#sed -e '/^.*ubuntu-noble.*/d' -i /etc/hosts
sed -e "/^.*${HOSTNAME}.*/d" -i /etc/hosts
# Export PRIMARY IP as an environment variable
@@ -42,11 +44,13 @@ echo "PRIMARY_IP=${MY_IP}" >> /etc/environment
[ "$BUILD_MODE" = "BRIDGE" ] && exit 0
# Update /etc/hosts about other hosts (NAT mode)
echo "${MY_NETWORK}.${MASTER_IP_START} controlplane" >> /etc//hosts
for i in $(seq 1 $NUM_WORKER_NODES)
do
num=$(( $NODE_IP_START + $i ))
echo "${MY_NETWORK}.${num} node0${i}" >> /etc//hosts
done
if [ "${INSTALL_MODE}" = "KUBEADM" ]
then
# Update /etc/hosts about other hosts (NAT mode)
echo "${MY_NETWORK}.${MASTER_IP_START} controlplane" >> /etc/hosts
for i in $(seq 1 $NUM_WORKER_NODES)
do
num=$(( $NODE_IP_START + $i ))
echo "${MY_NETWORK}.${num} node0${i}" >> /etc/hosts
done
fi