chg: User from root To vagrant

This commit modifies the instructions so that they use the vagrant user
instead of root. sudo is now required for a significant number of the
commands.

parent b1fe36516e
commit 84c96710a3
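The change follows one pattern throughout the diff below: commands that used to run in a root shell are now run by the unprivileged `vagrant` user with `sudo` prefixed. A minimal before/after illustration (the command shown is one representative example from the diff):

```bash
# Before this commit (run as root on the target machine):
#   mkdir -p /etc/kubernetes/config
# After this commit (run as the vagrant user):
sudo mkdir -p /etc/kubernetes/config
```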
.gitignore

@@ -1,4 +1,9 @@
-admin-csr.json
+# Directories
+# -----------
+.idea/
+.vagrant/
+
+admin-csr.json
 admin-key.pem
 admin.csr
 admin.pem
@@ -47,5 +52,4 @@ service-account-key.pem
 service-account.csr
 service-account.pem
 service-account-csr.json
-*.swp
-.idea/
+*.swp
docs/02-jumpbox.md

@@ -25,7 +25,9 @@ everything up.
 
 ### Install Command Line Utilities
 
-Now that you are logged into the `jumpbox` machine as the `root` user, you will install the command line utilities that will be used to preform various tasks throughout the tutorial.
+Now that you are logged into the `jumpbox` machine as the `vagrant` user, you
+will install the command line utilities that will be used to perform various
+tasks throughout the tutorial.
 
 ```bash
 {
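The body of this code block falls outside the hunk. In the upstream tutorial it installs a handful of utilities; under the `vagrant` user the same step would need `sudo`. A sketch, assuming the upstream package list:

```bash
{
  # Assumed from the upstream lab; the package names are illustrative.
  sudo apt-get update
  sudo apt-get -y install wget curl vim openssl git
}
```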
@@ -36,11 +38,14 @@ Now that you are logged into the `jumpbox` machine as the `root` user, you will
 
 ### Sync GitHub Repository
 
-Now it's time to download a copy of this tutorial which contains the configuration files and templates that will be used build your Kubernetes cluster from the ground up. Clone the Kubernetes The Hard Way git repository using the `git` command:
+Now it's time to download a copy of this tutorial which contains the
+configuration files and templates that will be used to build your Kubernetes
+cluster from the ground up. Clone the Kubernetes The Hard Way git repository
+using the `git` command:
 
 ```bash
 git clone --depth 1 \
-  https://github.com/kelseyhightower/kubernetes-the-hard-way.git
+  https://github.com/b01/kubernetes-the-hard-way.git
 ```
 
 Change into the `kubernetes-the-hard-way` directory:
 
@@ -49,27 +54,36 @@ Change into the `kubernetes-the-hard-way` directory:
 cd kubernetes-the-hard-way
 ```
 
-This will be the working directory for the rest of the tutorial. If you ever get lost run the `pwd` command to verify you are in the right directory when running commands on the `jumpbox`:
+This will be the working directory for the rest of the tutorial. If you ever
+get lost run the `pwd` command to verify you are in the right directory when
+running commands on the `jumpbox`:
 
 ```bash
 pwd
 ```
 
 ```text
-/root/kubernetes-the-hard-way
+/home/vagrant/kubernetes-the-hard-way
 ```
 
 ### Download Binaries
 
-In this section you will download the binaries for the various Kubernetes components. The binaries will be stored in the `downloads` directory on the `jumpbox`, which will reduce the amount of internet bandwidth required to complete this tutorial as we avoid downloading the binaries multiple times for each machine in our Kubernetes cluster.
+In this section you will download the binaries for the various Kubernetes
+components. The binaries will be stored in the `downloads` directory on the
+`jumpbox`, which will reduce the amount of internet bandwidth required to
+complete this tutorial as we avoid downloading the binaries multiple times
+for each machine in our Kubernetes cluster.
 
-The binaries that will be downloaded are listed in either the `downloads-amd64.txt` or `downloads-arm64.txt` file depending on your hardware architecture, which you can review using the `cat` command:
+The binaries that will be downloaded are listed in either the
+`downloads-amd64.txt` or `downloads-arm64.txt` file depending on your hardware
+architecture, which you can review using the `cat` command:
 
 ```bash
 cat downloads-$(dpkg --print-architecture).txt
 ```
 
-Download the binaries into a directory called `downloads` using the `wget` command:
+Download the binaries into a directory called `downloads` using the `wget`
+command:
 
 ```bash
 wget -q --show-progress \
@@ -79,30 +93,40 @@ wget -q --show-progress \
   -i downloads-$(dpkg --print-architecture).txt
 ```
 
-Depending on your internet connection speed it may take a while to download over `500` megabytes of binaries, and once the download is complete, you can list them using the `ls` command:
+Depending on your internet connection speed it may take a while to download
+over `500` megabytes of binaries, and once the download is complete, you can
+list them using the `ls` command:
 
 ```bash
 ls -oh downloads
 ```
 
-Extract the component binaries from the release archives and organize them under the `downloads` directory.
+Extract the component binaries from the release archives and organize them
+under the `downloads` directory.
 
 ```bash
 {
+  KUBE_VER=v1.33.1
+  CRI_VER=v1.33.0
+  RUNC_VER=v1.3.0
+  CNI_VER=v1.7.1
+  CONTAINERD_VER=2.1.1
+  ETCD_VER=v3.6.0
+
   ARCH=$(dpkg --print-architecture)
   mkdir -p downloads/{client,cni-plugins,controller,worker}
-  tar -xvf downloads/crictl-v1.32.0-linux-${ARCH}.tar.gz \
+  tar -xvf downloads/crictl-${CRI_VER}-linux-${ARCH}.tar.gz \
     -C downloads/worker/
-  tar -xvf downloads/containerd-2.1.0-beta.0-linux-${ARCH}.tar.gz \
+  tar -xvf downloads/containerd-${CONTAINERD_VER}-linux-${ARCH}.tar.gz \
     --strip-components 1 \
     -C downloads/worker/
-  tar -xvf downloads/cni-plugins-linux-${ARCH}-v1.6.2.tgz \
+  tar -xvf downloads/cni-plugins-linux-${ARCH}-${CNI_VER}.tgz \
     -C downloads/cni-plugins/
-  tar -xvf downloads/etcd-v3.6.0-rc.3-linux-${ARCH}.tar.gz \
+  tar -xvf downloads/etcd-${ETCD_VER}-linux-${ARCH}.tar.gz \
     -C downloads/ \
     --strip-components 1 \
-    etcd-v3.6.0-rc.3-linux-${ARCH}/etcdctl \
-    etcd-v3.6.0-rc.3-linux-${ARCH}/etcd
+    etcd-${ETCD_VER}-linux-${ARCH}/etcdctl \
+    etcd-${ETCD_VER}-linux-${ARCH}/etcd
   mv downloads/{etcdctl,kubectl} downloads/client/
   mv downloads/{etcd,kube-apiserver,kube-controller-manager,kube-scheduler} \
     downloads/controller/
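Since every archive name above is now derived from a version variable, a quick pre-flight check can catch a download list that is out of sync with the variables. A minimal sketch (not part of the commit), reusing the same variables:

```bash
# Verify each expected archive exists before extracting; the version
# variables are the ones defined in the block above.
ARCH=$(dpkg --print-architecture)
for f in \
  "crictl-${CRI_VER}-linux-${ARCH}.tar.gz" \
  "containerd-${CONTAINERD_VER}-linux-${ARCH}.tar.gz" \
  "cni-plugins-linux-${ARCH}-${CNI_VER}.tgz" \
  "etcd-${ETCD_VER}-linux-${ARCH}.tar.gz"; do
  [ -f "downloads/${f}" ] || echo "missing: downloads/${f}"
done
```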
@@ -125,17 +149,22 @@ Make the binaries executable.
 
 ### Install kubectl
 
-In this section you will install the `kubectl`, the official Kubernetes client command line tool, on the `jumpbox` machine. `kubectl` will be used to interact with the Kubernetes control plane once your cluster is provisioned later in this tutorial.
+In this section you will install `kubectl`, the official Kubernetes client
+command line tool, on the `jumpbox` machine. `kubectl` will be used to interact
+with the Kubernetes control plane once your cluster is provisioned later in
+this tutorial.
 
-Use the `chmod` command to make the `kubectl` binary executable and move it to the `/usr/local/bin/` directory:
+Use the `chmod` command to make the `kubectl` binary executable and move it to
+the `/usr/local/bin/` directory:
 
 ```bash
 {
-  cp downloads/client/kubectl /usr/local/bin/
+  sudo cp downloads/client/kubectl /usr/local/bin/
 }
 ```
 
-At this point `kubectl` is installed and can be verified by running the `kubectl` command:
+At this point `kubectl` is installed and can be verified by running the
+`kubectl` command:
 
 ```bash
 kubectl version --client
@@ -143,9 +172,10 @@ kubectl version --client
 
 ```text
 Client Version: v1.33.1
-Kustomize Version: v5.5.0
+Kustomize Version: v5.6.0
 ```
 
-At this point the `jumpbox` has been set up with all the command line tools and utilities necessary to complete the labs in this tutorial.
+At this point the `jumpbox` has been set up with all the command line tools
+and utilities necessary to complete the labs in this tutorial.
 
 Next: [Provisioning Compute Resources](03-compute-resources.md)
docs/03-compute-resources.md

@@ -2,7 +2,7 @@
 
 Kubernetes requires a set of machines to host the Kubernetes control plane and
 the worker nodes where containers are ultimately run. In this lab you will
-provision the machines required for setting up a Kubernetes cluster.
+ready the machines you have provisioned for setting up a Kubernetes cluster.
 
 ## Machine Database
 
@@ -39,81 +39,8 @@ XXX.XXX.XXX.XXX node02.kubernetes.local node02 10.200.1.0/24
 Now it's your turn to create a `machines.txt` file with the details for the
 three machines you will be using to create your Kubernetes cluster. Use the
 example machine database from above and add the details for your machines.
 
-## Enable root Login
-
-Initially the root account will be locked on all machines. You will need to
-manually unlock the root account on each virtual machine.
-
-You'll need to repeat these steps on each machine.
-
-Login to the machine with the `vagrant` user:
-
-`vagrant ssh@jumpbox`
-
-Now set a password for the root account:
-
-```shell
-sudo passwd root
-```
-
-NOTE: You can choose password **vagrant** to keep it the same as the vagrant
-user, and there will be only 1 password to remember.
-
-You'll need to unlock the password of the named account. This option re-enables
-a password by changing the password back to its previous value. In this case
-it should be set to the password we just assigned.
-
-```shell
-sudo passwd -u root
-```
-
-Test that it works by running and entering the password you set:
-
-```shell
-su
-```
-
-## Configuring SSH Access
-
-SSH will be used to configure the machines in the cluster. Verify that you have
-`root` SSH access to each machine listed in your machine database. You may need
-to enable root SSH access on each node by updating the sshd_config file and
-restarting the SSH server.
-
-### Enable root SSH Access
-
-If `root` SSH access is enabled for each of your machines you can skip this
-section.
-
-By default, a new install may disable SSH access for the `root` user. This is
-done for security reasons as the `root` user has total administrative control
-of unix-like systems. If a weak password is used on a machine connected to the
-internet, well, let's just say it's only a matter of time before your machine
-belongs to someone else. As mentioned earlier, we are going to enable `root`
-access over SSH in order to streamline the steps in this tutorial. Security is
-a tradeoff, and in this case, we are optimizing for convenience. Log on to each
-machine via SSH using your user account, then switch to the `root` user using
-the `su` command:
-
-```bash
-su - root
-```
-
-Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the
-`PermitRootLogin` option to `yes`:
-
-```bash
-sed -i \
-  's/^#*PermitRootLogin.*/PermitRootLogin yes/' \
-  /etc/ssh/sshd_config
-```
-
-Restart the `sshd` SSH server to pick up the updated configuration file:
-
-```bash
-systemctl restart sshd
-```
+NOTE: Do NOT leave a newline at the end of the file, or you will get an error
+when using it in the for loops.
 
 ### Generate and Distribute SSH Keys
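For reference, a hypothetical `machines.txt` matching the NAT addresses defined in this repo's Vagrantfile (CONTROLPLANE_NAT_IP = 11, NODE_IP_START = 20, so the worker nodes land on .21 and .22). The columns are IP, FQDN, HOST, and SUBNET, as consumed by the `while read` loops below:

```bash
# Illustrative only; substitute the addresses of your own machines.
cat > machines.txt <<'EOF'
192.168.56.11 controlplane.kubernetes.local controlplane
192.168.56.21 node01.kubernetes.local node01 10.200.0.0/24
192.168.56.22 node02.kubernetes.local node02 10.200.1.0/24
EOF
```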
@@ -141,7 +68,7 @@ Copy the SSH public key to each machine:
 
 ```bash
 while read IP FQDN HOST SUBNET; do
-  ssh-copy-id root@${IP}
+  ssh-copy-id vagrant@${IP}
 done < machines.txt
 ```
 
@@ -149,7 +76,7 @@ Once each key is added, verify SSH public key access is working:
 
 ```bash
 while read IP FQDN HOST SUBNET; do
-  ssh -n root@${IP} hostname
+  ssh -n vagrant@${IP} hostname
 done < machines.txt
 ```
 
@@ -176,10 +103,10 @@ Set the hostname on each machine listed in the `machines.txt` file:
 
 ```bash
 while read IP FQDN HOST SUBNET; do
-  CMD="sed -i 's/^127.0.1.1.*/127.0.1.1\t${FQDN} ${HOST}/' /etc/hosts"
-  ssh -n root@${IP} "$CMD"
-  ssh -n root@${IP} hostnamectl set-hostname ${HOST}
-  ssh -n root@${IP} systemctl restart systemd-hostnamed
+  CMD="sudo sed -i 's/^127.0.1.1.*/127.0.1.1\t${FQDN} ${HOST}/' /etc/hosts"
+  ssh -n vagrant@${IP} "$CMD"
+  ssh -n vagrant@${IP} sudo hostnamectl set-hostname ${HOST}
+  ssh -n vagrant@${IP} sudo systemctl restart systemd-hostnamed
 done < machines.txt
 ```
 
@@ -187,7 +114,7 @@ Verify the hostname is set on each machine:
 
 ```bash
 while read IP FQDN HOST SUBNET; do
-  ssh -n root@${IP} hostname --fqdn
+  ssh -n vagrant@${IP} hostname --fqdn
 done < machines.txt
 ```
 
@@ -199,7 +126,10 @@ node02.kubernetes.local
 
 ## Host Lookup Table
 
-In this section you will generate a `hosts` file which will be appended to `/etc/hosts` file on the `jumpbox` and to the `/etc/hosts` files on all three cluster members used for this tutorial. This will allow each machine to be reachable using a hostname such as `controlplane`, `node01`, or `node02`.
+In this section you will generate a `hosts` file which will be appended to the
+`/etc/hosts` file on the `jumpbox` and to the `/etc/hosts` files on all three
+cluster members used for this tutorial. This will allow each machine to be
+reachable using a hostname such as `controlplane`, `node01`, or `node02`.
 
 Create a new `hosts` file and add a header to identify the machines being added:
 
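The command that creates the file falls outside this hunk; in the upstream lab the step looks roughly like the following (header text assumed):

```bash
# Start an empty hosts file and add an identifying header comment.
echo "" > hosts
echo "# Kubernetes The Hard Way" >> hosts
```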
@@ -240,7 +170,7 @@ local `/etc/hosts` file on your `jumpbox` machine.
 Append the DNS entries from `hosts` to `/etc/hosts`:
 
 ```bash
-cat hosts >> /etc/hosts
+cat hosts | sudo tee -a /etc/hosts
 ```
 
 Verify that the `/etc/hosts` file has been updated:
 
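The switch from `>>` to `sudo tee -a` matters because an output redirection is performed by the caller's unprivileged shell before `sudo` ever runs, while `tee` opens the file from inside the privileged process. A minimal illustration:

```bash
# Fails with "Permission denied": the shell, not sudo, opens /etc/hosts.
#   sudo cat hosts >> /etc/hosts
# Works: tee runs under sudo and opens the file itself.
cat hosts | sudo tee -a /etc/hosts > /dev/null
```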
@@ -269,7 +199,7 @@ At this point you should be able to SSH to each machine listed in the
 
 ```bash
 for host in controlplane node01 node02
-  do ssh root@${host} hostname
+  do ssh vagrant@${host} hostname
 done
 ```
 
@@ -288,9 +218,9 @@ Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:
 
 ```bash
 while read IP FQDN HOST SUBNET; do
-  scp hosts root@${HOST}:~/
+  scp hosts vagrant@${HOST}:~/
   ssh -n \
-    root@${HOST} "cat hosts >> /etc/hosts"
+    vagrant@${HOST} "cat hosts | sudo tee -a /etc/hosts"
 done < machines.txt
 ```
 
docs/04-certificate-authority.md

@@ -21,9 +21,14 @@ Take a moment to review the `ca.conf` configuration file:
 cat ca.conf
 ```
 
-You don't need to understand everything in the `ca.conf` file to complete this tutorial, but you should consider it a starting point for learning `openssl` and the configuration that goes into managing certificates at a high level.
+You don't need to understand everything in the `ca.conf` file to complete this
+tutorial, but you should consider it a starting point for learning `openssl`
+and the configuration that goes into managing certificates at a high level.
 
-Every certificate authority starts with a private key and root certificate. In this section we are going to create a self-signed certificate authority, and while that's all we need for this tutorial, this shouldn't be considered something you would do in a real-world production environment.
+Every certificate authority starts with a private key and root certificate. In
+this section we are going to create a self-signed certificate authority, and
+while that's all we need for this tutorial, this shouldn't be considered
+something you would do in a real-world production environment.
 
 Generate the CA configuration file, certificate, and private key:
 
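The generation commands themselves fall outside this hunk. In the upstream lab they are plain `openssl` calls driven by `ca.conf`; a sketch, assuming the upstream parameters (key size, validity, and flags are the upstream defaults, not part of this commit):

```bash
{
  openssl genrsa -out ca.key 4096
  openssl req -x509 -new -sha512 -noenc \
    -key ca.key -days 3653 \
    -config ca.conf \
    -out ca.crt
}
```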
@@ -45,7 +50,8 @@ ca.crt ca.key
 
 ## Create Client and Server Certificates
 
-In this section you will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes `admin` user.
+In this section you will generate client and server certificates for each
+Kubernetes component and a client certificate for the Kubernetes `admin` user.
 
 Generate the certificates and private keys:
 
@@ -76,7 +82,9 @@ for i in ${certs[*]}; do
 done
 ```
 
-The results of running the above command will generate a private key, certificate request, and signed SSL certificate for each of the Kubernetes components. You can list the generated files with the following command:
+Running the above command will generate a private key, certificate request,
+and signed SSL certificate for each of the Kubernetes components. You can
+list the generated files with the following command:
 
 ```bash
 ls -1 *.crt *.key *.csr
@@ -84,21 +92,27 @@ ls -1 *.crt *.key *.csr
 
 ## Distribute the Client and Server Certificates
 
-In this section you will copy the various certificates to every machine at a path where each Kubernetes component will search for its certificate pair. In a real-world environment these certificates should be treated like a set of sensitive secrets as they are used as credentials by the Kubernetes components to authenticate to each other.
+In this section you will copy the various certificates to every machine at a
+path where each Kubernetes component will search for its certificate pair. In
+a real-world environment these certificates should be treated like a set of
+sensitive secrets as they are used as credentials by the Kubernetes components
+to authenticate to each other.
 
-Copy the appropriate certificates and private keys to the `node01` and `node02` machines:
+Copy the appropriate certificates and private keys to the `node01` and `node02`
+machines:
 
 ```bash
 for host in node01 node02; do
-  ssh root@${host} mkdir -p /var/lib/kubelet/
+  ssh vagrant@${host} sudo mkdir -p /var/lib/kubelet/
 
-  scp ca.crt root@${host}:/var/lib/kubelet/
+  scp ca.crt vagrant@${host}:~/
+  ssh -n vagrant@${host} "sudo mv ca.crt /var/lib/kubelet/ca.crt"
 
-  scp ${host}.crt \
-    root@${host}:/var/lib/kubelet/kubelet.crt
+  scp ${host}.crt vagrant@${host}:~/
+  ssh -n vagrant@${host} "sudo mv ${host}.crt /var/lib/kubelet/kubelet.crt"
 
-  scp ${host}.key \
-    root@${host}:/var/lib/kubelet/kubelet.key
+  scp ${host}.key vagrant@${host}:~/
+  ssh -n vagrant@${host} "sudo mv ${host}.key /var/lib/kubelet/kubelet.key"
 done
 ```
 
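The loop above shows the two-step copy this commit uses wherever a file must end up in a root-owned directory: `scp` as `vagrant` can no longer write there directly, so files are staged in the home directory and then promoted with `sudo`. The general shape (file and host names hypothetical):

```bash
# Stage in the unprivileged user's home directory, then move with sudo.
scp some.crt vagrant@node01:~/
ssh -n vagrant@node01 "sudo mv some.crt /var/lib/kubelet/some.crt"
```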
@@ -109,9 +123,11 @@ scp \
   ca.key ca.crt \
   kube-apiserver.key kube-apiserver.crt \
   service-accounts.key service-accounts.crt \
-  root@controlplane:~/
+  vagrant@controlplane:~/
 ```
 
-> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
+> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet`
+> client certificates will be used to generate client authentication
+> configuration files in the next lab.
 
 Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
docs/05-kubernetes-configuration-files.md

@@ -1,6 +1,6 @@
 # Generating Kubernetes Configuration Files for Authentication
 
-In this lab you will generate [Kubernetes client configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/),
+In this lab you will generate [Kubernetes client configuration files],
 typically called kubeconfigs, which configure Kubernetes clients to connect
 and authenticate to Kubernetes API Servers.
 
@@ -13,11 +13,10 @@ In this section you will generate kubeconfig files for the `kubelet` and the
 
 When generating kubeconfig files for Kubelets the client certificate matching
 the Kubelet's node name must be used. This will ensure Kubelets are properly
-authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/reference/access-authn-authz/node/).
+authorized by the Kubernetes [Node Authorizer].
 
 > The following commands must be run in the same directory used to generate
-> the SSL certificates during the
-> [Generating TLS Certificates](04-certificate-authority.md) lab.
+> the SSL certificates during the [Generating TLS Certificates] lab.
 
 Generate a kubeconfig file for the `node01` and `node02` worker nodes:
 
@@ -191,27 +190,35 @@ admin.kubeconfig
 
 ## Distribute the Kubernetes Configuration Files
 
-Copy the `kubelet` and `kube-proxy` kubeconfig files to the `node01` and `node02` machines:
+Copy the `kubelet` and `kube-proxy` kubeconfig files to the `node01` and
+`node02` machines:
 
 ```bash
 for host in node01 node02; do
-  ssh root@${host} "mkdir -p /var/lib/{kube-proxy,kubelet}"
+  ssh vagrant@${host} "sudo mkdir -p /var/lib/{kube-proxy,kubelet}"
 
-  scp kube-proxy.kubeconfig \
-    root@${host}:/var/lib/kube-proxy/kubeconfig
+  scp kube-proxy.kubeconfig vagrant@${host}:~/
+  ssh vagrant@${host} "sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig"
 
-  scp ${host}.kubeconfig \
-    root@${host}:/var/lib/kubelet/kubeconfig
+  scp ${host}.kubeconfig vagrant@${host}:~/
+  ssh vagrant@${host} "sudo mv ${host}.kubeconfig /var/lib/kubelet/kubeconfig"
 done
 ```
 
-Copy the `kube-controller-manager` and `kube-scheduler` kubeconfig files to the `controlplane` machine:
+Copy the `kube-controller-manager` and `kube-scheduler` kubeconfig files to
+the `controlplane` machine:
 
 ```bash
 scp admin.kubeconfig \
   kube-controller-manager.kubeconfig \
   kube-scheduler.kubeconfig \
-  root@controlplane:~/
+  vagrant@controlplane:~/
 ```
 
 Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
+
+---
+
+[Kubernetes client configuration files]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
+[Node Authorizer]: https://kubernetes.io/docs/reference/access-authn-authz/node/
+[Generating TLS Certificates]: 04-certificate-authority.md
docs/06-data-encryption-keys.md

@@ -28,7 +28,7 @@ Copy the `encryption-config.yaml` encryption config file to each controller
 instance:
 
 ```bash
-scp encryption-config.yaml root@controlplane:~/
+scp encryption-config.yaml vagrant@controlplane:~/
 ```
 
 Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)
docs/07-bootstrapping-etcd.md

@@ -12,14 +12,14 @@ scp \
   downloads/controller/etcd \
   downloads/client/etcdctl \
   units/etcd.service \
-  root@controlplane:~/
+  vagrant@controlplane:~/
 ```
 
 The commands in this lab must be run on the `controlplane` machine. Login to
 the `controlplane` machine using the `ssh` command. Example:
 
 ```bash
-ssh root@controlplane
+ssh vagrant@controlplane
 ```
 
 ## Bootstrapping an etcd Cluster
@@ -30,7 +30,7 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
 
 ```bash
 {
-  mv etcd etcdctl /usr/local/bin/
+  sudo mv etcd etcdctl /usr/local/bin/
 }
 ```
 
@@ -38,9 +38,9 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
 
 ```bash
 {
-  mkdir -p /etc/etcd /var/lib/etcd
-  chmod 700 /var/lib/etcd
-  cp ca.crt kube-apiserver.key kube-apiserver.crt \
+  sudo mkdir -p /etc/etcd /var/lib/etcd
+  sudo chmod 700 /var/lib/etcd
+  sudo cp ca.crt kube-apiserver.key kube-apiserver.crt \
     /etc/etcd/
 }
 ```
@@ -51,16 +51,16 @@ name to match the hostname of the current compute instance:
 Create the `etcd.service` systemd unit file:
 
 ```bash
-mv etcd.service /etc/systemd/system/
+sudo mv etcd.service /etc/systemd/system/
 ```
 
 ### Start the etcd Server
 
 ```bash
 {
-  systemctl daemon-reload
-  systemctl enable etcd
-  systemctl start etcd
+  sudo systemctl daemon-reload
+  sudo systemctl enable etcd
+  sudo systemctl start etcd.service
 }
 ```
 
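Once the unit starts, the upstream lab verifies etcd by listing the cluster members; the same check works here without `sudo`, since etcd listens on localhost:

```bash
# Quick verification (mirrors the upstream lab's check).
systemctl is-active etcd
etcdctl member list
```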
docs/08-bootstrapping-kubernetes-controllers.md

@@ -1,10 +1,13 @@
 # Bootstrapping the Kubernetes Control Plane
 
-In this lab you will bootstrap the Kubernetes control plane. The following components will be installed on the `controlplane` machine: Kubernetes API Server, Scheduler, and Controller Manager.
+In this lab you will bootstrap the Kubernetes control plane. The following
+components will be installed on the `controlplane` machine: Kubernetes API
+Server, Scheduler, and Controller Manager.
 
 ## Prerequisites
 
-Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files to the `controlplane` machine:
+Connect to the `jumpbox` and copy Kubernetes binaries and systemd unit files
+to the `controlplane` machine:
 
 ```bash
 scp \
@@ -17,13 +20,14 @@ scp \
   units/kube-scheduler.service \
   configs/kube-scheduler.yaml \
   configs/kube-apiserver-to-kubelet.yaml \
-  root@controlplane:~/
+  vagrant@controlplane:~/
 ```
 
-The commands in this lab must be run on the `controlplane` machine. Login to the `controlplane` machine using the `ssh` command. Example:
+The commands in this lab must be run on the `controlplane` machine. Login to
+the `controlplane` machine using the `ssh` command. Example:
 
 ```bash
-ssh root@controlplane
+ssh vagrant@controlplane
 ```
 
 ## Provision the Kubernetes Control Plane
 
@@ -31,7 +35,7 @@ ssh root@controlplane
 Create the Kubernetes configuration directory:
 
 ```bash
-mkdir -p /etc/kubernetes/config
+sudo mkdir -p /etc/kubernetes/config
 ```
 
 ### Install the Kubernetes Controller Binaries
 
@@ -40,7 +44,7 @@ Install the Kubernetes binaries:
 
 ```bash
 {
-  mv kube-apiserver \
+  sudo mv kube-apiserver \
     kube-controller-manager \
     kube-scheduler kubectl \
     /usr/local/bin/
@@ -51,9 +55,9 @@ Install the Kubernetes binaries:
 
 ```bash
 {
-  mkdir -p /var/lib/kubernetes/
+  sudo mkdir -p /var/lib/kubernetes/
 
-  mv ca.crt ca.key \
+  sudo mv ca.crt ca.key \
     kube-apiserver.key kube-apiserver.crt \
     service-accounts.key service-accounts.crt \
     encryption-config.yaml \
@@ -64,7 +68,7 @@ mv kube-apiserver.service \
 Create the `kube-apiserver.service` systemd unit file:
 
 ```bash
-mv kube-apiserver.service \
+sudo mv kube-apiserver.service \
   /etc/systemd/system/kube-apiserver.service
 ```
 
@@ -73,13 +77,13 @@ mv kube-apiserver.service \
 Move the `kube-controller-manager` kubeconfig into place:
 
 ```bash
-mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
+sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
 ```
 
 Create the `kube-controller-manager.service` systemd unit file:
 
 ```bash
-mv kube-controller-manager.service /etc/systemd/system/
+sudo mv kube-controller-manager.service /etc/systemd/system/
 ```
 
 ### Configure the Kubernetes Scheduler
 
@@ -87,31 +91,31 @@ mv kube-controller-manager.service /etc/systemd/system/
 Move the `kube-scheduler` kubeconfig into place:
 
 ```bash
-mv kube-scheduler.kubeconfig /var/lib/kubernetes/
+sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
 ```
 
 Create the `kube-scheduler.yaml` configuration file:
 
 ```bash
-mv kube-scheduler.yaml /etc/kubernetes/config/
+sudo mv kube-scheduler.yaml /etc/kubernetes/config/
 ```
 
 Create the `kube-scheduler.service` systemd unit file:
 
 ```bash
-mv kube-scheduler.service /etc/systemd/system/
+sudo mv kube-scheduler.service /etc/systemd/system/
 ```
 
 ### Start the Controller Services
 
 ```bash
 {
-  systemctl daemon-reload
+  sudo systemctl daemon-reload
 
-  systemctl enable kube-apiserver \
+  sudo systemctl enable kube-apiserver \
     kube-controller-manager kube-scheduler
 
-  systemctl start kube-apiserver \
+  sudo systemctl start kube-apiserver \
     kube-controller-manager kube-scheduler
 }
 ```
@@ -127,7 +131,10 @@ systemctl is-active kube-apiserver
 For a more detailed status check, which includes additional process information and log messages, use the `systemctl status` command:
 
 ```bash
-systemctl status kube-apiserver
+sudo systemctl status kube-apiserver
+sudo systemctl status kube-controller-manager
+
+sudo systemctl status kube-scheduler
 ```
 
 If you run into any errors, or want to view the logs for any of the control plane components, use the `journalctl` command. For example, to view the logs for the `kube-apiserver` run the following command:
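The example command itself falls outside the hunk; the usual invocation (assumed, matching the unit name installed above) is:

```bash
sudo journalctl -u kube-apiserver
```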
New file (Markdown doc for enabling root login; filename not shown in the extract):

@@ -0,0 +1,74 @@
+# Enable root Login
+
+Initially the root account will be locked on all machines. You will need to
+manually unlock the root account on each virtual machine.
+
+You'll need to repeat these steps on each machine.
+
+Login to the machine with the `vagrant` user:
+
+`vagrant ssh jumpbox`
+
+Now set a password for the root account:
+
+```shell
+sudo passwd root
+```
+
+NOTE: You can choose password **vagrant** to keep it the same as the vagrant
+user, and there will be only 1 password to remember.
+
+You'll need to unlock the password of the named account. This option re-enables
+a password by changing the password back to its previous value. In this case
+it should be set to the password we just assigned.
+
+```shell
+sudo passwd -u root
+```
+
+Test that it works by running `su` and entering the password you set:
+
+```shell
+su
+```
+
+## Configuring SSH Access
+
+SSH will be used to configure the machines in the cluster. Verify that you have
+`root` SSH access to each machine listed in your machine database. You may need
+to enable root SSH access on each node by updating the sshd_config file and
+restarting the SSH server.
+
+### Enable root SSH Access
+
+If `root` SSH access is enabled for each of your machines you can skip this
+section.
+
+By default, a new install may disable SSH access for the `root` user. This is
+done for security reasons as the `root` user has total administrative control
+of unix-like systems. If a weak password is used on a machine connected to the
+internet, well, let's just say it's only a matter of time before your machine
+belongs to someone else. As mentioned earlier, we are going to enable `root`
+access over SSH in order to streamline the steps in this tutorial. Security is
+a tradeoff, and in this case, we are optimizing for convenience. Log on to each
+machine via SSH using your user account, then switch to the `root` user using
+the `su` command:
+
+```bash
+su - root
+```
+
+Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the
+`PermitRootLogin` option to `yes`:
+
+```bash
+sed -i \
+  's/^#*PermitRootLogin.*/PermitRootLogin yes/' \
+  /etc/ssh/sshd_config
+```
+
+Restart the `sshd` SSH server to pick up the updated configuration file:
+
+```bash
+systemctl restart sshd
+```
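An optional check before restarting (not part of the commit): confirm the `sed` edit took effect and that the configuration still parses.

```bash
# Show the effective setting, then validate sshd_config syntax;
# sshd -t prints nothing on success.
grep '^PermitRootLogin' /etc/ssh/sshd_config
sshd -t
```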
units/etcd.service

@@ -11,7 +11,7 @@ ExecStart=/usr/local/bin/etcd \
   --listen-client-urls http://127.0.0.1:2379 \
   --advertise-client-urls http://127.0.0.1:2379 \
   --initial-cluster-token etcd-cluster-0 \
-  --initial-cluster controller=http://127.0.0.1:2380 \
+  --initial-cluster controlplane=http://127.0.0.1:2380 \
   --initial-cluster-state new \
   --data-dir=/var/lib/etcd
 Restart=on-failure
Vagrantfile

@@ -11,6 +11,7 @@
 # need to know so much networking setup. Also no jumpbox is included.
 INSTALL_MODE = "MANUAL"
 
+BOX_IMG = "ubuntu/jammy64"
 BOOT_TIMEOUT_SEC = 120
 
 # Set the build mode
@@ -27,7 +28,11 @@ NUM_WORKER_NODES = 2
 
 # Network parameters for NAT mode
 NAT_IP_PREFIX = "192.168.56"
-JUMPER_IP_START = 10
+
+JUMPER_NAME = "jumpbox"
+JUMPER_NAT_START_IP = 10
 
+CONTROLPLANE_NAME = "controlplane"
 CONTROLPLANE_NAT_IP = 11
 NODE_IP_START = 20
 
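For reference, the NAT addresses these constants produce (derived from the values above and the `NODE_IP_START + i` expression used later in this file):

```bash
# jumpbox        192.168.56.10    (JUMPER_NAT_START_IP)
# controlplane   192.168.56.11    (CONTROLPLANE_NAT_IP)
# node01/node02  192.168.56.21/.22 (NODE_IP_START + node index)
```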
@@ -79,7 +84,7 @@ end
 
 # Helper method to determine whether all nodes are up
 def all_nodes_up()
-  if get_machine_id("controlplane").nil?
+  if get_machine_id(CONTROLPLANE_NAME).nil?
     return false
   end
 
@@ -89,7 +94,7 @@ def all_nodes_up()
     end
   end
 
-  if get_machine_id("jumpbox").nil?
+  if get_machine_id(JUMPER_NAME).nil?
     return false
   end
 
@@ -108,7 +113,7 @@ def setup_dns(node)
   node.vm.provision "setup-dns", type: "shell", :path => "ubuntu/update-dns.sh"
 end
 
-# Runs provisioning steps that are required by masters and workers
+# Runs provisioning steps that are required by controlplanes and workers
 def provision_kubernetes_node(node)
   # Set up DNS
   setup_dns node
@@ -129,7 +134,7 @@ Vagrant.configure("2") do |config|
   # boxes at https://portal.cloud.hashicorp.com/vagrant/discover
   # config.vm.box = "base"
 
-  config.vm.box = "ubuntu/jammy64"
+  config.vm.box = BOX_IMG
   config.vm.boot_timeout = BOOT_TIMEOUT_SEC
 
   # Set SSH login user and password
@@ -142,20 +147,20 @@ Vagrant.configure("2") do |config|
   config.vm.box_check_update = false
 
   # Provision controlplane Nodes
-  config.vm.define "controlplane" do |node|
+  config.vm.define CONTROLPLANE_NAME do |node|
     # Name shown in the GUI
     node.vm.provider "virtualbox" do |vb|
-      vb.name = "controlplane"
+      vb.name = CONTROLPLANE_NAME
       vb.memory = 2048
       vb.cpus = 2
     end
-    node.vm.hostname = "controlplane"
+    node.vm.hostname = CONTROLPLANE_NAME
     if BUILD_MODE == "BRIDGE"
       adapter = ""
       node.vm.network :public_network, bridge: get_bridge_adapter()
     else
       node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{CONTROLPLANE_NAT_IP}"
-      node.vm.network "forwarded_port", guest: 22, host: "#{2710}"
+      #node.vm.network "forwarded_port", guest: 22, host: "#{2710}"
     end
     provision_kubernetes_node node
     # Install (opinionated) configs for vim and tmux on master-1. These used by the author for CKA exam.
@@ -176,7 +181,7 @@ Vagrant.configure("2") do |config|
       node.vm.network :public_network, bridge: get_bridge_adapter()
     else
       node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{NODE_IP_START + i}"
-      node.vm.network "forwarded_port", guest: 22, host: "#{2720 + i}"
+      #node.vm.network "forwarded_port", guest: 22, host: "#{2720 + i}"
     end
     provision_kubernetes_node node
   end
@@ -184,20 +189,20 @@ Vagrant.configure("2") do |config|
 
   if INSTALL_MODE == "MANUAL"
     # Provision a JumpBox
-    config.vm.define "jumpbox" do |node|
+    config.vm.define JUMPER_NAME do |node|
       # Name shown in the GUI
       node.vm.provider "virtualbox" do |vb|
-        vb.name = "jumpbox"
+        vb.name = JUMPER_NAME
         vb.memory = 512
         vb.cpus = 1
       end
-      node.vm.hostname = "jumpbox"
+      node.vm.hostname = JUMPER_NAME
       if BUILD_MODE == "BRIDGE"
         adapter = ""
         node.vm.network :public_network, bridge: get_bridge_adapter()
       else
-        node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{JUMPER_IP_START}"
-        node.vm.network "forwarded_port", guest: 22, host: "#{2730}"
+        node.vm.network :private_network, ip: NAT_IP_PREFIX + ".#{JUMPER_NAT_START_IP}"
+        #node.vm.network "forwarded_port", guest: 22, host: "#{2730}"
       end
       provision_kubernetes_node node
     end
@@ -214,7 +219,7 @@ Vagrant.configure("2") do |config|
     trigger.ruby do |env, machine|
       if all_nodes_up()
         puts "  Gathering IP addresses of nodes..."
-        nodes = ["controlplane"]
+        nodes = [CONTROLPLANE_NAME]
         ips = []
         (1..NUM_WORKER_NODES).each do |i|
           nodes.push("node0#{i}")
New file (root login provisioning script; filename not shown in the extract):

@@ -0,0 +1,20 @@
+#!/bin/bash
+
+# Enable root account login
+
+# If sudo with the default vagrant SSH user is acceptable, then we may not do
+# this and just update the documentation to use the vagrant user and sudo before
+# commands.
+
+# Set the root user password
+echo -e "vagrant\nvagrant" | passwd root
+
+# unlock the root user
+passwd -u root
+
+# Enable root SSH login
+sed -i 's/^#*PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
+
+systemctl restart sshd
+
+echo "root account setup script done"
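A note on the password step: `chpasswd` is the purpose-built tool for non-interactive password changes and avoids relying on `passwd` reading the new password twice from stdin. An equivalent alternative (not in the commit):

```bash
# Sets root's password to "vagrant" in one shot.
echo 'root:vagrant' | chpasswd
```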