chg: Hostnames In Documentation Continued

Updated commands that require sudo when running as the vagrant user.
pull/882/head
Khalifah Shabazz 2025-06-03 09:32:11 -04:00
parent 58f1cdc411
commit fe76f494fb
GPG Key ID: 762A588BFB5A40ED
5 changed files with 40 additions and 49 deletions

.gitignore vendored
View File

@@ -3,16 +3,13 @@
.idea/
.vagrant/
admin-csr.json
admin-key.pem
admin.key
admin.csr
admin.pem
admin.crt
admin.kubeconfig
ca-config.json
ca-csr.json
ca-key.pem
ca.key
ca.csr
ca.pem
ca.crt
/encryption-config.yaml
kube-controller-manager-csr.json
kube-controller-manager-key.pem
@@ -29,27 +26,11 @@ kube-proxy-key.pem
kube-proxy.csr
kube-proxy.kubeconfig
kube-proxy.pem
kubernetes-csr.json
kubernetes-key.pem
kubernetes.csr
kubernetes.pem
worker-0-csr.json
worker-0-key.pem
worker-0.csr
worker-0.kubeconfig
worker-0.pem
worker-1-csr.json
worker-1-key.pem
worker-1.csr
worker-1.kubeconfig
worker-1.pem
worker-2-csr.json
worker-2-key.pem
worker-2.csr
worker-2.kubeconfig
worker-2.pem
service-account-key.pem
node02.key
node02.csr
node02.kubeconfig
node02.crt
service-account.key
service-account.csr
service-account.pem
service-account-csr.json
service-account.crt
*.swp

View File

@@ -35,7 +35,7 @@ This tutorial requires four (4) ARM64 or AMD64 based virtual or physical machine
* [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
* [Generating the Data Encryption Config and Key](docs/06-data-encryption-keys.md)
* [Bootstrapping the etcd Cluster](docs/07-bootstrapping-etcd.md)
* [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controllers.md)
* [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controlplane.md)
* [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
* [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
* [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)

View File

@@ -76,7 +76,7 @@ etcdctl member list
6702b0a34e2cfd39, started, controller, http://127.0.0.1:2380, http://127.0.0.1:2379, false
```
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controlplane.md)
---

View File

@@ -122,30 +122,35 @@ sudo mv kube-scheduler.service /etc/systemd/system/
> Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
You can check if any of the control plane components are active using the `systemctl` command. For example, to check if the `kube-apiserver` fully initialized, and active, run the following command:
You can check if any of the control plane components are active using the
`systemctl` command. For example, to check if the `kube-apiserver` is fully
initialized and active, run the following command:
```bash
systemctl is-active kube-apiserver
```
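If you are scripting this step, a small poll keeps that 10-second initialization window in mind. This loop is only a convenience sketch under those assumptions, not part of the tutorial:

```bash
# Wait up to roughly 10 seconds for the API server unit to report "active".
for _ in $(seq 1 10); do
  systemctl is-active --quiet kube-apiserver && break
  sleep 1
done
systemctl is-active kube-apiserver
```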
For a more detailed status check, which includes additional process information and log messages, use the `systemctl status` command:
For a more detailed status check, which includes additional process information
and log messages, use the `systemctl status` command:
```bash
sudo systemctl status kube-apiserver
sudo systemctl status kube-controller-manager
sudo systemctl status kube-scheduler
```
If you run into any errors, or want to view the logs for any of the control plane components, use the `journalctl` command. For example, to view the logs for the `kube-apiserver` run the following command:
If you run into any errors, or want to view the logs for any of the control
plane components, use the `journalctl` command. For example, to view the logs
for the `kube-apiserver` run the following command:
```bash
journalctl -u kube-apiserver
sudo journalctl -u kube-apiserver
```
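When a unit keeps failing, narrowing the journal to recent, error-level messages is usually faster than scrolling the full log. The time window below is an arbitrary example:

```bash
# Only error-level messages from the last ten minutes, without paging.
sudo journalctl -u kube-apiserver -p err --since "10 minutes ago" --no-pager
```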
### Verification
At this point the Kubernetes control plane components should be up and running. Verify this using the `kubectl` command line tool:
At this point the Kubernetes control plane components should be up and running.
Verify this using the `kubectl` command line tool:
```bash
kubectl cluster-info \
@@ -158,17 +163,23 @@ Kubernetes control plane is running at https://127.0.0.1:6443
## RBAC for Kubelet Authorization
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
In this section you will configure RBAC permissions to allow the Kubernetes API
Server to access the Kubelet API on each worker node. Access to the Kubelet API
is required for retrieving metrics, logs, and executing commands in pods.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access) API to determine authorization.
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`.
> Webhook mode uses the [SubjectAccessReview] API to determine authorization.
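To get a feel for what the webhook flow checks, you can submit a SubjectAccessReview yourself from the `controlplane` machine. The user, attributes, and kubeconfig below are illustrative assumptions; before the ClusterRoleBinding created later in this section exists, expect the result to be `allowed: false`:

```bash
cat <<'EOF' | kubectl create --kubeconfig admin.kubeconfig -o yaml -f -
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  # Illustrative check: may the user "kubernetes" (assumed to be the CN of the
  # API server's kubelet client certificate) read node logs?
  user: kubernetes
  resourceAttributes:
    verb: get
    resource: nodes
    subresource: log
EOF
```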
The commands in this section will affect the entire cluster and only need to be run on the `controlplane` machine.
The commands in this section will affect the entire cluster and only need to be
run on the `controlplane` machine.
```bash
ssh root@controlplane
```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
Create the `system:kube-apiserver-to-kubelet` [ClusterRole] with permissions
to access the Kubelet API and perform most common tasks associated with
managing pods:
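The manifest itself is not shown in this diff. The sketch below follows the upstream Kubernetes The Hard Way version of `kube-apiserver-to-kubelet.yaml`, so treat the exact rules and the `kubernetes` subject as assumptions and adjust them to your certificates before applying:

```bash
cat > kube-apiserver-to-kubelet.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  # Must match the CN presented by the API server's kubelet client
  # certificate; "kubernetes" is the upstream name and an assumption here.
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
```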
```bash
kubectl apply -f kube-apiserver-to-kubelet.yaml \
@@ -177,7 +188,8 @@ kubectl apply -f kube-apiserver-to-kubelet.yaml \
### Verification
At this point the Kubernetes control plane is up and running. Run the following commands from the `jumpbox` machine to verify it's working:
At this point the Kubernetes control plane is up and running. Run the following
commands from the `jumpbox` machine to verify it's working:
Make an HTTP request for the Kubernetes version info:
@@ -201,3 +213,8 @@ curl --cacert ca.crt \
```
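The continuation of the `curl` command is elided by the diff; the target is simply the API server's `/version` endpoint on port 6443. The hostname below is a placeholder, since the exact name depends on what was placed in the API server certificate and is not shown in this change:

```bash
# API_SERVER_HOST is whatever hostname you issued the API server certificate
# for; substitute the value used in your environment.
curl --cacert ca.crt "https://${API_SERVER_HOST}:6443/version"
```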
Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)
---
[SubjectAccessReview]: https://kubernetes.io/docs/reference/access-authn-authz/authorization/#checking-api-access
[ClusterRole]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole

View File

@@ -137,10 +137,6 @@ Vagrant.configure("2") do |config|
config.vm.box = BOX_IMG
config.vm.boot_timeout = BOOT_TIMEOUT_SEC
# Set SSH login user and password
config.ssh.username = "root"
config.ssh.password = "vagrant"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
@@ -160,7 +156,6 @@ Vagrant.configure("2") do |config|
node.vm.network :public_network, bridge: get_bridge_adapter()
else
node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{CONTROLPLANE_NAT_IP}"
#node.vm.network "forwarded_port", guest: 22, host: "#{2710}"
end
provision_kubernetes_node node
# Install (opinionated) configs for vim and tmux on master-1. These are used by the author for the CKA exam.
@@ -181,7 +176,6 @@ Vagrant.configure("2") do |config|
node.vm.network :public_network, bridge: get_bridge_adapter()
else
node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{NODE_IP_START + i}"
#node.vm.network "forwarded_port", guest: 22, host: "#{2720 + i}"
end
provision_kubernetes_node node
end
@@ -202,7 +196,6 @@ Vagrant.configure("2") do |config|
node.vm.network :public_network, bridge: get_bridge_adapter()
else
node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{JUMPER_NAT_START_IP}"
#node.vm.network "forwarded_port", guest: 22, host: "#{2730}"
end
provision_kubernetes_node node
end