chg: Hostnames In Documentation

Changed server to controlplane, node-0 to node01, and node-1 to node02 in the
documentation. Also started reformatting to limit lines to 80 characters.
Added a section on how to enable the root account for login.

parent e4d9c25520
commit 55ca1d706d

ca.conf (62 lines changed)
@@ -9,8 +9,8 @@ keyUsage = cRLSign, keyCertSign
 [req_distinguished_name]
 C = US
-ST = Washington
-L = Seattle
+ST = Michigan
+L = Redford
 CN = CA
 
 [admin]
@@ -46,47 +46,47 @@ CN = service-accounts
 # that identifies them as being in the `system:nodes` group, with a username
 # of `system:node:<nodeName>`.
 
-[node-0]
-distinguished_name = node-0_distinguished_name
+[node01]
+distinguished_name = node01_distinguished_name
 prompt = no
-req_extensions = node-0_req_extensions
+req_extensions = node01_req_extensions
 
-[node-0_req_extensions]
+[node01_req_extensions]
 basicConstraints = CA:FALSE
 extendedKeyUsage = clientAuth, serverAuth
 keyUsage = critical, digitalSignature, keyEncipherment
 nsCertType = client
-nsComment = "Node-0 Certificate"
-subjectAltName = DNS:node-0, IP:127.0.0.1
+nsComment = "node01 Certificate"
+subjectAltName = DNS:node01, IP:127.0.0.1
 subjectKeyIdentifier = hash
 
-[node-0_distinguished_name]
-CN = system:node:node-0
+[node01_distinguished_name]
+CN = system:node:node01
 O = system:nodes
 C = US
-ST = Washington
-L = Seattle
+ST = Michigan
+L = Redford
 
-[node-1]
-distinguished_name = node-1_distinguished_name
+[node02]
+distinguished_name = node02_distinguished_name
 prompt = no
-req_extensions = node-1_req_extensions
+req_extensions = node02_req_extensions
 
-[node-1_req_extensions]
+[node02_req_extensions]
 basicConstraints = CA:FALSE
 extendedKeyUsage = clientAuth, serverAuth
 keyUsage = critical, digitalSignature, keyEncipherment
 nsCertType = client
-nsComment = "Node-1 Certificate"
-subjectAltName = DNS:node-1, IP:127.0.0.1
+nsComment = "node02 Certificate"
+subjectAltName = DNS:node02, IP:127.0.0.1
 subjectKeyIdentifier = hash
 
-[node-1_distinguished_name]
-CN = system:node:node-1
+[node02_distinguished_name]
+CN = system:node:node02
 O = system:nodes
 C = US
-ST = Washington
-L = Seattle
+ST = Michigan
+L = Redford
 
 
 # Kube Proxy Section
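The node sections above are OpenSSL `req` configuration fragments. As an aside, a minimal standalone sketch of how such a section drives key and CSR generation can be run locally; the file names, the trimmed-down config (nsComment/nsCertType omitted), and the command line are illustrative assumptions, not part of this commit:

```shell
# Hypothetical standalone config mirroring the node01 sections above.
cat > node01.conf <<'EOF'
[req]
prompt = no
distinguished_name = node01_distinguished_name
req_extensions = node01_req_extensions

[node01_req_extensions]
basicConstraints = CA:FALSE
extendedKeyUsage = clientAuth, serverAuth
keyUsage = critical, digitalSignature, keyEncipherment
subjectAltName = DNS:node01, IP:127.0.0.1
subjectKeyIdentifier = hash

[node01_distinguished_name]
CN = system:node:node01
O = system:nodes
C = US
ST = Michigan
L = Redford
EOF

# Generate an unencrypted key and a CSR carrying the extensions above.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout node01.key -out node01.csr -config node01.conf

# Inspect the subject the CA would see when signing.
openssl req -in node01.csr -noout -subject
```

The `system:node:<nodeName>` CN plus `system:nodes` O is what lets the Kubernetes Node authorizer recognize the resulting certificate as a kubelet credential.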
@@ -108,8 +108,8 @@ subjectKeyIdentifier = hash
 CN = system:kube-proxy
 O = system:node-proxier
 C = US
-ST = Washington
-L = Seattle
+ST = Michigan
+L = Redford
 
 
 # Controller Manager
@@ -131,8 +131,8 @@ subjectKeyIdentifier = hash
 CN = system:kube-controller-manager
 O = system:kube-controller-manager
 C = US
-ST = Washington
-L = Seattle
+ST = Michigan
+L = Redford
 
 
 # Scheduler
@@ -154,8 +154,8 @@ subjectKeyIdentifier = hash
 CN = system:kube-scheduler
 O = system:system:kube-scheduler
 C = US
-ST = Washington
-L = Seattle
+ST = Michigan
+L = Redford
 
 
 # API Server
@@ -187,14 +187,14 @@ DNS.1 = kubernetes.default
 DNS.2 = kubernetes.default.svc
 DNS.3 = kubernetes.default.svc.cluster
 DNS.4 = kubernetes.svc.cluster.local
-DNS.5 = server.kubernetes.local
+DNS.5 = controlplane.kubernetes.local
 DNS.6 = api-server.kubernetes.local
 
 [kube-api-server_distinguished_name]
 CN = kubernetes
 C = US
-ST = Washington
-L = Seattle
+ST = Michigan
+L = Redford
 
 
 [default_req_extensions]

@@ -1,46 +1,107 @@
 # Provisioning Compute Resources
 
-Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the machines required for setting up a Kubernetes cluster.
+Kubernetes requires a set of machines to host the Kubernetes control plane and
+the worker nodes where containers are ultimately run. In this lab you will
+provision the machines required for setting up a Kubernetes cluster.
 
 ## Machine Database
 
-This tutorial will leverage a text file, which will serve as a machine database, to store the various machine attributes that will be used when setting up the Kubernetes control plane and worker nodes. The following schema represents entries in the machine database, one entry per line:
+This tutorial will leverage a text file, which will serve as a machine database,
+to store the various machine attributes that will be used when setting up the
+Kubernetes control plane and worker nodes. The following schema represents
+entries in the machine database, one entry per line:
 
 ```text
 IPV4_ADDRESS FQDN HOSTNAME POD_SUBNET
 ```
 
-Each of the columns corresponds to a machine IP address `IPV4_ADDRESS`, fully qualified domain name `FQDN`, host name `HOSTNAME`, and the IP subnet `POD_SUBNET`. Kubernetes assigns one IP address per `pod` and the `POD_SUBNET` represents the unique IP address range assigned to each machine in the cluster for doing so.
+Each of the columns corresponds to a machine IP address `IPV4_ADDRESS`, fully
+qualified domain name `FQDN`, host name `HOSTNAME`, and the IP subnet
+`POD_SUBNET`. Kubernetes assigns one IP address per `pod` and the `POD_SUBNET`
+represents the unique IP address range assigned to each machine in the cluster
+for doing so.
 
-Here is an example machine database similar to the one used when creating this tutorial. Notice the IP addresses have been masked out. Your machines can be assigned any IP address as long as each machine is reachable from each other and the `jumpbox`.
+Here is an example machine database similar to the one used when creating this
+tutorial. Notice the IP addresses have been masked out. Your machines can be
+assigned any IP address as long as the machines can reach each other and the
+`jumpbox`.
 
 ```bash
 cat machines.txt
 ```
 
 ```text
-XXX.XXX.XXX.XXX server.kubernetes.local server
-XXX.XXX.XXX.XXX node-0.kubernetes.local node-0 10.200.0.0/24
-XXX.XXX.XXX.XXX node-1.kubernetes.local node-1 10.200.1.0/24
+XXX.XXX.XXX.XXX controlplane.kubernetes.local controlplane
+XXX.XXX.XXX.XXX node01.kubernetes.local node01 10.200.0.0/24
+XXX.XXX.XXX.XXX node02.kubernetes.local node02 10.200.1.0/24
 ```
 
-Now it's your turn to create a `machines.txt` file with the details for the three machines you will be using to create your Kubernetes cluster. Use the example machine database from above and add the details for your machines.
+Now it's your turn to create a `machines.txt` file with the details for the
+three machines you will be using to create your Kubernetes cluster. Use the
+example machine database from above and add the details for your machines.
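The four-column schema above can be exercised with a short local sketch; the IP addresses below are illustrative placeholders for the masked `XXX.XXX.XXX.XXX` values, not addresses from this commit:

```shell
# Build an example machine database following the documented schema.
cat > machines.txt <<'EOF'
192.168.56.10 controlplane.kubernetes.local controlplane
192.168.56.20 node01.kubernetes.local node01 10.200.0.0/24
192.168.56.30 node02.kubernetes.local node02 10.200.1.0/24
EOF

# Read one entry per line into the four schema columns; the control plane
# carries no POD_SUBNET, so that field reads as empty.
while read IP FQDN HOST SUBNET; do
  echo "host=${HOST} ip=${IP} subnet=${SUBNET:-n/a}"
done < machines.txt
```

This same `while read IP FQDN HOST SUBNET` idiom is what the later labs use to drive per-machine configuration.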
+
+## Enable root Login
+
+Initially the root account will be locked on all machines. You will need to
+manually unlock the root account on each virtual machine.
+
+You'll need to repeat these steps on each machine.
+
+Log in to the machine with the `vagrant` user:
+
+`vagrant ssh jumpbox`
+
+Now set a password for the root account:
+
+```shell
+sudo passwd root
+```
+
+NOTE: You can choose the password **vagrant** to keep it the same as the
+vagrant user's, so there is only one password to remember.
+
+Next, unlock the root account. The `-u` option re-enables login by restoring
+the account's password, which in this case is the one you just assigned:
+
+```shell
+sudo passwd -u root
+```
+
+Test that it works by running `su` and entering the password you set:
+
+```shell
+su
+```
 
 ## Configuring SSH Access
 
-SSH will be used to configure the machines in the cluster. Verify that you have `root` SSH access to each machine listed in your machine database. You may need to enable root SSH access on each node by updating the sshd_config file and restarting the SSH server.
+SSH will be used to configure the machines in the cluster. Verify that you have
+`root` SSH access to each machine listed in your machine database. You may need
+to enable root SSH access on each node by updating the sshd_config file and
+restarting the SSH server.
 
 ### Enable root SSH Access
 
-If `root` SSH access is enabled for each of your machines you can skip this section.
+If `root` SSH access is enabled for each of your machines you can skip this
+section.
 
-By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons as the `root` user has total administrative control of unix-like systems. If a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mentioned earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. Log on to each machine via SSH using your user account, then switch to the `root` user using the `su` command:
+By default, a new install may disable SSH access for the `root` user. This is
+done for security reasons, as the `root` user has total administrative control
+of unix-like systems. If a weak password is used on a machine connected to the
+internet, it's only a matter of time before your machine belongs to someone
+else. As mentioned earlier, we are going to enable `root` access over SSH in
+order to streamline the steps in this tutorial. Security is a tradeoff, and in
+this case we are optimizing for convenience. Log on to each machine via SSH
+using your user account, then switch to the `root` user using the `su` command:
 
 ```bash
 su - root
 ```
 
-Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the `PermitRootLogin` option to `yes`:
+Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the
+`PermitRootLogin` option to `yes`:
 
 ```bash
 sed -i \
@@ -56,7 +117,10 @@ systemctl restart sshd
 
 ### Generate and Distribute SSH Keys
 
-In this section you will generate and distribute an SSH keypair to the `server`, `node-0`, and `node-1` machines, which will be used to run commands on those machines throughout this tutorial. Run the following commands from the `jumpbox` machine.
+In this section you will generate and distribute an SSH keypair to the
+`controlplane`, `node01`, and `node02` machines, which will be used to run
+commands on those machines throughout this tutorial. Run the following commands
+from the `jumpbox` machine.
 
 Generate a new SSH key:
@@ -90,16 +154,23 @@ done < machines.txt
 ```
 
 ```text
-server
-node-0
-node-1
+controlplane
+node01
+node02
 ```
 
 ## Hostnames
 
-In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. The hostname also plays a major role within the cluster. Instead of Kubernetes clients using an IP address to issue commands to the Kubernetes API server, those clients will use the `server` hostname instead. Hostnames are also used by each worker machine, `node-0` and `node-1` when registering with a given Kubernetes cluster.
+In this section you will assign hostnames to the `controlplane`, `node01`,
+and `node02` machines. The hostname will be used when executing commands from
+the `jumpbox` to each machine. The hostname also plays a major role within the
+cluster. Instead of Kubernetes clients using an IP address to issue commands to
+the Kubernetes API server, those clients will use the `controlplane` hostname
+instead. Hostnames are also used by each worker machine, `node01` and `node02`,
+when registering with a given Kubernetes cluster.
 
-To configure the hostname for each machine, run the following commands on the `jumpbox`.
+To configure the hostname for each machine, run the following commands on the
+`jumpbox`.
 
 Set the hostname on each machine listed in the `machines.txt` file:
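The hostname-setting loop itself is elided from this hunk. A local sketch of what the jumpbox-side commands could look like follows; the `sed`/`hostnamectl` pairing and the echoed (not executed) `ssh` wrapper are assumptions for illustration, and the IPs are placeholders:

```shell
# Illustrative machine database; real IPs are masked in the tutorial.
cat > machines.txt <<'EOF'
192.168.56.10 controlplane.kubernetes.local controlplane
192.168.56.20 node01.kubernetes.local node01 10.200.0.0/24
EOF

# Print (rather than execute) the per-machine commands the jumpbox would run:
# rewrite the 127.0.1.1 loopback entry, then set the hostname.
while read IP FQDN HOST SUBNET; do
  CMD="sed -i 's/^127.0.1.1.*/127.0.1.1 ${FQDN} ${HOST}/' /etc/hosts; hostnamectl set-hostname ${HOST}"
  echo "ssh -n root@${IP} \"${CMD}\""
done < machines.txt | tee hostname-commands.txt
```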
@@ -121,14 +192,14 @@ done < machines.txt
 ```
 
 ```text
-server.kubernetes.local
-node-0.kubernetes.local
-node-1.kubernetes.local
+controlplane.kubernetes.local
+node01.kubernetes.local
+node02.kubernetes.local
 ```
 
 ## Host Lookup Table
 
-In this section you will generate a `hosts` file which will be appended to `/etc/hosts` file on the `jumpbox` and to the `/etc/hosts` files on all three cluster members used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.
+In this section you will generate a `hosts` file which will be appended to the
+`/etc/hosts` file on the `jumpbox` and to the `/etc/hosts` files on all three
+cluster members used for this tutorial. This will allow each machine to be
+reachable using a hostname such as `controlplane`, `node01`, or `node02`.
 
 Create a new `hosts` file and add a header to identify the machines being added:
@@ -137,7 +208,8 @@ echo "" > hosts
 echo "# Kubernetes The Hard Way" >> hosts
 ```
 
-Generate a host entry for each machine in the `machines.txt` file and append it to the `hosts` file:
+Generate a host entry for each machine in the `machines.txt` file and append it
+to the `hosts` file:
 
 ```bash
 while read IP FQDN HOST SUBNET; do
@@ -155,14 +227,15 @@ cat hosts
 ```text
 
 # Kubernetes The Hard Way
-XXX.XXX.XXX.XXX server.kubernetes.local server
-XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
-XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
+XXX.XXX.XXX.XXX controlplane.kubernetes.local controlplane
+XXX.XXX.XXX.XXX node01.kubernetes.local node01
+XXX.XXX.XXX.XXX node02.kubernetes.local node02
 ```
 
 ## Adding `/etc/hosts` Entries To A Local Machine
 
-In this section you will append the DNS entries from the `hosts` file to the local `/etc/hosts` file on your `jumpbox` machine.
+In this section you will append the DNS entries from the `hosts` file to the
+local `/etc/hosts` file on your `jumpbox` machine.
 
 Append the DNS entries from `hosts` to `/etc/hosts`:
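The generation step from the previous hunk plus the append step referenced here can be sketched locally, writing to a scratch `hosts` file rather than the real `/etc/hosts` (IPs below are illustrative placeholders):

```shell
# Illustrative machine database standing in for the masked example.
cat > machines.txt <<'EOF'
192.168.56.10 controlplane.kubernetes.local controlplane
192.168.56.20 node01.kubernetes.local node01 10.200.0.0/24
192.168.56.30 node02.kubernetes.local node02 10.200.1.0/24
EOF

# Header, then one "IP FQDN HOSTNAME" entry per machine, as in the output above.
echo "" > hosts
echo "# Kubernetes The Hard Way" >> hosts
while read IP FQDN HOST SUBNET; do
  echo "${IP} ${FQDN} ${HOST}" >> hosts
done < machines.txt
cat hosts
```

On the jumpbox the resulting file would be appended with something like `cat hosts >> /etc/hosts`, which is the step this hunk leads into.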
@@ -186,28 +259,30 @@ ff02::1 ip6-allnodes
 ff02::2 ip6-allrouters
 
 # Kubernetes The Hard Way
-XXX.XXX.XXX.XXX server.kubernetes.local server
-XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
-XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
+XXX.XXX.XXX.XXX controlplane.kubernetes.local controlplane
+XXX.XXX.XXX.XXX node01.kubernetes.local node01
+XXX.XXX.XXX.XXX node02.kubernetes.local node02
 ```
 
-At this point you should be able to SSH to each machine listed in the `machines.txt` file using a hostname.
+At this point you should be able to SSH to each machine listed in the
+`machines.txt` file using a hostname.
 
 ```bash
-for host in server node-0 node-1
+for host in controlplane node01 node02
 do ssh root@${host} hostname
 done
 ```
 
 ```text
-server
-node-0
-node-1
+controlplane
+node01
+node02
 ```
 
 ## Adding `/etc/hosts` Entries To The Remote Machines
 
-In this section you will append the host entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.
+In this section you will append the host entries from `hosts` to `/etc/hosts`
+on each machine listed in the `machines.txt` text file.
 
 Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:
@@ -219,6 +294,9 @@ while read IP FQDN HOST SUBNET; do
 done < machines.txt
 ```
 
-At this point, hostnames can be used when connecting to machines from your `jumpbox` machine, or any of the three machines in the Kubernetes cluster. Instead of using IP addresses you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.
+At this point, hostnames can be used when connecting to machines from your
+`jumpbox` machine, or any of the three machines in the Kubernetes cluster.
+Instead of using IP addresses you can now connect to machines using a hostname
+such as `controlplane`, `node01`, or `node02`.
 
 Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

@@ -42,7 +42,7 @@ Generate the certificates and private keys:
 
 ```bash
 certs=(
-  "admin" "node-0" "node-1"
+  "admin" "node01" "node02"
   "kube-proxy" "kube-scheduler"
   "kube-controller-manager"
   "kube-api-server"
@@ -77,10 +77,10 @@ ls -1 *.crt *.key *.csr
 
 In this section you will copy the various certificates to every machine at a path where each Kubernetes component will search for its certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets as they are used as credentials by the Kubernetes components to authenticate to each other.
 
-Copy the appropriate certificates and private keys to the `node-0` and `node-1` machines:
+Copy the appropriate certificates and private keys to the `node01` and `node02` machines:
 
 ```bash
-for host in node-0 node-1; do
+for host in node01 node02; do
   ssh root@${host} mkdir /var/lib/kubelet/
 
   scp ca.crt root@${host}:/var/lib/kubelet/
@@ -93,12 +93,12 @@ for host in node-0 node-1; do
 done
 ```
 
-Copy the appropriate certificates and private keys to the `server` machine:
+Copy the appropriate certificates and private keys to the `controlplane` machine:
 
 ```bash
 scp \
   ca.key ca.crt \
   kube-api-server.key kube-api-server.crt \
   service-accounts.key service-accounts.crt \
   root@server:~/
 ```
@@ -12,10 +12,10 @@ When generating kubeconfig files for Kubelets the client certificate matching th
 
 > The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
 
-Generate a kubeconfig file for the `node-0` and `node-1` worker nodes:
+Generate a kubeconfig file for the `node01` and `node02` worker nodes:
 
 ```bash
-for host in node-0 node-1; do
+for host in node01 node02; do
   kubectl config set-cluster kubernetes-the-hard-way \
     --certificate-authority=ca.crt \
     --embed-certs=true \
@@ -41,8 +41,8 @@ done
 Results:
 
 ```text
-node-0.kubeconfig
-node-1.kubeconfig
+node01.kubeconfig
+node02.kubeconfig
 ```
 
 ### The kube-proxy Kubernetes Configuration File
@@ -184,10 +184,10 @@ admin.kubeconfig
 
 ## Distribute the Kubernetes Configuration Files
 
-Copy the `kubelet` and `kube-proxy` kubeconfig files to the `node-0` and `node-1` machines:
+Copy the `kubelet` and `kube-proxy` kubeconfig files to the `node01` and `node02` machines:
 
 ```bash
-for host in node-0 node-1; do
+for host in node01 node02; do
   ssh root@${host} "mkdir -p /var/lib/{kube-proxy,kubelet}"
 
   scp kube-proxy.kubeconfig \
@@ -9,7 +9,7 @@ The commands in this section must be run from the `jumpbox`.
 Copy the Kubernetes binaries and systemd unit files to each worker instance:
 
 ```bash
-for HOST in node-0 node-1; do
+for HOST in node01 node02; do
   SUBNET=$(grep ${HOST} machines.txt | cut -d " " -f 4)
   sed "s|SUBNET|$SUBNET|g" \
     configs/10-bridge.conf > 10-bridge.conf
@@ -23,7 +23,7 @@ done
 ```
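The `SUBNET` lookup driving the loop above can be exercised locally; the `podCIDR: SUBNET` placeholder line is an illustrative stand-in for the `sed` rewrite of `configs/10-bridge.conf`, and the IPs are example values:

```shell
# Illustrative machine database; real IPs are masked in the tutorial.
cat > machines.txt <<'EOF'
192.168.56.20 node01.kubernetes.local node01 10.200.0.0/24
192.168.56.30 node02.kubernetes.local node02 10.200.1.0/24
EOF

for HOST in node01 node02; do
  # Fourth column of the matching machines.txt line is the node's pod subnet.
  SUBNET=$(grep ${HOST} machines.txt | cut -d " " -f 4)
  # Stand-in for rewriting configs/10-bridge.conf with that subnet.
  echo "podCIDR: SUBNET" | sed "s|SUBNET|${SUBNET}|g"
done | tee bridge-preview.txt
```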
 
 ```bash
-for HOST in node-0 node-1; do
+for HOST in node01 node02; do
   scp \
     downloads/worker/* \
     downloads/client/kubectl \
@@ -38,17 +38,17 @@ done
 ```
 
 ```bash
-for HOST in node-0 node-1; do
+for HOST in node01 node02; do
   scp \
     downloads/cni-plugins/* \
     root@${HOST}:~/cni-plugins/
 done
 ```
 
-The commands in the next section must be run on each worker instance: `node-0`, `node-1`. Login to the worker instance using the `ssh` command. Example:
+The commands in the next section must be run on each worker instance: `node01`, `node02`. Log in to the worker instance using the `ssh` command. Example:
 
 ```bash
-ssh root@node-0
+ssh root@node01
 ```
 
 ## Provisioning a Kubernetes Worker Node
@@ -184,7 +184,7 @@ systemctl is-active kubelet
 active
 ```
 
-Be sure to complete the steps in this section on each worker node, `node-0` and `node-1`, before moving on to the next section.
+Be sure to complete the steps in this section on each worker node, `node01` and `node02`, before moving on to the next section.
 
 ## Verification
@@ -200,8 +200,8 @@ ssh root@server \
 
 ```
 NAME     STATUS   ROLES    AGE   VERSION
-node-0   Ready    <none>   1m    v1.32.3
-node-1   Ready    <none>   10s   v1.32.3
+node01   Ready    <none>   1m    v1.32.3
+node02   Ready    <none>   10s   v1.32.3
 ```
 
 Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)

@@ -74,8 +74,8 @@ kubectl get nodes
 
 ```
 NAME     STATUS   ROLES    AGE   VERSION
-node-0   Ready    <none>   10m   v1.32.3
-node-1   Ready    <none>   10m   v1.32.3
+node01   Ready    <none>   10m   v1.32.3
+node02   Ready    <none>   10m   v1.32.3
 ```
 
 Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)

@@ -15,10 +15,10 @@ Print the internal IP address and Pod CIDR range for each worker instance:
 ```bash
 {
   SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
-  NODE_0_IP=$(grep node-0 machines.txt | cut -d " " -f 1)
-  NODE_0_SUBNET=$(grep node-0 machines.txt | cut -d " " -f 4)
-  NODE_1_IP=$(grep node-1 machines.txt | cut -d " " -f 1)
-  NODE_1_SUBNET=$(grep node-1 machines.txt | cut -d " " -f 4)
+  NODE_0_IP=$(grep node01 machines.txt | cut -d " " -f 1)
+  NODE_0_SUBNET=$(grep node01 machines.txt | cut -d " " -f 4)
+  NODE_1_IP=$(grep node02 machines.txt | cut -d " " -f 1)
+  NODE_1_SUBNET=$(grep node02 machines.txt | cut -d " " -f 4)
 }
 ```
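With example values in place of the masked IPs, the variable block above resolves to the `ip route add` commands issued in the following hunks. A local sketch that prints them (addresses are illustrative placeholders):

```shell
# Illustrative machine database; real IPs are masked in the tutorial.
cat > machines.txt <<'EOF'
192.168.56.10 controlplane.kubernetes.local controlplane
192.168.56.20 node01.kubernetes.local node01 10.200.0.0/24
192.168.56.30 node02.kubernetes.local node02 10.200.1.0/24
EOF

{
  NODE_0_IP=$(grep node01 machines.txt | cut -d " " -f 1)
  NODE_0_SUBNET=$(grep node01 machines.txt | cut -d " " -f 4)
  NODE_1_IP=$(grep node02 machines.txt | cut -d " " -f 1)
  NODE_1_SUBNET=$(grep node02 machines.txt | cut -d " " -f 4)
  # Each node routes the other node's pod subnet via that node's address.
  echo "node01: ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}"
  echo "node02: ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}"
} | tee routes.txt
```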
@@ -30,13 +30,13 @@ EOF
 ```
 
 ```bash
-ssh root@node-0 <<EOF
+ssh root@node01 <<EOF
   ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}
 EOF
 ```
 
 ```bash
-ssh root@node-1 <<EOF
+ssh root@node02 <<EOF
   ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}
 EOF
 ```
@@ -55,7 +55,7 @@ XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
 ```
 
 ```bash
-ssh root@node-0 ip route
+ssh root@node01 ip route
 ```
 
 ```text
@@ -65,7 +65,7 @@ XXX.XXX.XXX.0/24 dev ens160 proto kernel scope link src XXX.XXX.XXX.XXX
 ```
 
 ```bash
-ssh root@node-1 ip route
+ssh root@node02 ip route
 ```
 
 ```text
@@ -22,7 +22,7 @@ ExecStart=/usr/local/bin/kube-apiserver \
   --runtime-config='api/all=true' \
   --service-account-key-file=/var/lib/kubernetes/service-accounts.crt \
   --service-account-signing-key-file=/var/lib/kubernetes/service-accounts.key \
-  --service-account-issuer=https://server.kubernetes.local:6443 \
+  --service-account-issuer=https://controlplane.kubernetes.local:6443 \
   --service-node-port-range=30000-32767 \
   --tls-cert-file=/var/lib/kubernetes/kube-api-server.crt \
   --tls-private-key-file=/var/lib/kubernetes/kube-api-server.key \