# Provisioning Compute Resources
Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the machines required for setting up a Kubernetes cluster.

## Machine Database

This tutorial will leverage a text file, which will serve as a machine database, to store the various machine attributes that will be used when setting up the Kubernetes control plane and worker nodes. The following schema represents entries in the machine database, one entry per line:

```text
IPV4_ADDRESS FQDN HOSTNAME POD_SUBNET
```
Each of the columns corresponds to a machine IP address `IPV4_ADDRESS`, fully qualified domain name `FQDN`, host name `HOSTNAME`, and the IP subnet `POD_SUBNET`. Kubernetes assigns one IP address per `pod` and the `POD_SUBNET` represents the unique IP address range assigned to each machine in the cluster for doing so.
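
Throughout this tutorial these columns are consumed with the shell's `read` builtin. The following is an illustrative sketch, not a lab step; it assumes the database has been saved as `machines.txt`, the file name used later in this lab. An entry without a `POD_SUBNET` column, such as the `server` entry, simply leaves the `SUBNET` variable empty:

```bash
# Illustrative only: map one machine database entry per line to shell variables.
while read IP FQDN HOST SUBNET; do
  echo "ip=${IP} fqdn=${FQDN} host=${HOST} pod_subnet=${SUBNET:-none}"
done < machines.txt
```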
Here is an example machine database similar to the one used when creating this tutorial. Notice the IP addresses have been masked out. Your machines can be assigned any IP address as long as each machine is reachable from each other and the `jumpbox`.

```bash
cat machines.txt
```
```text
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0 10.200.0.0/24
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1 10.200.1.0/24
```
Now it's your turn to create a `machines.txt` file with the details for the three machines you will be using to create your Kubernetes cluster. Use the example machine database from above and add the details for your machines.
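
Before moving on, it can be worth confirming that every machine in your database is reachable from the `jumpbox`. This is an optional sanity check, not part of the original lab:

```bash
# Optional: ping each machine once, with a 2 second timeout per attempt.
while read IP FQDN HOST SUBNET; do
  ping -c 1 -W 2 ${IP} > /dev/null && echo "${HOST} is reachable"
done < machines.txt
```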
## Configuring SSH Access

SSH will be used to configure the machines in the cluster. Verify that you have `root` SSH access to each machine listed in your machine database. You may need to enable root SSH access on each node by updating the `sshd_config` file and restarting the SSH server.

### Enable root SSH Access

If `root` SSH access is enabled for each of your machines you can skip this section.

By default, a new `debian` install disables SSH access for the `root` user. This is done for security reasons: the `root` user is a well-known user on Linux systems, and if a weak password is used on a machine connected to the internet, well, let's just say it's only a matter of time before your machine belongs to someone else. As mentioned earlier, we are going to enable `root` access over SSH in order to streamline the steps in this tutorial. Security is a tradeoff, and in this case, we are optimizing for convenience. Log in to each machine via SSH using your user account, then switch to the `root` user using the `su` command:
```bash
su - root
```
Edit the `/etc/ssh/sshd_config` SSH daemon configuration file and set the `PermitRootLogin` option to `yes`:

```bash
sed -i \
  's/^#PermitRootLogin.*/PermitRootLogin yes/' \
  /etc/ssh/sshd_config
```
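
Optionally, before restarting the server, you can ask `sshd` to validate the edited configuration; `sshd -t` exits non-zero if the file contains errors. This extra check is not part of the original lab:

```bash
# Optional: test the sshd configuration and show the edited option.
sshd -t && grep ^PermitRootLogin /etc/ssh/sshd_config
```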
Restart the `sshd` SSH server to pick up the updated configuration file:

```bash
systemctl restart sshd
```
### Generate and Distribute SSH Keys

In this section you will generate and distribute an SSH keypair to the `server`, `node-0`, and `node-1` machines, which will be used to run commands on those machines throughout this tutorial. Run the following commands from the `jumpbox` machine.

Generate a new SSH key:

```bash
ssh-keygen
```
```text
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
```
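
If you would rather skip the interactive prompts, `ssh-keygen` can also be run non-interactively. This is an optional alternative, not a lab step; it writes an RSA key with an empty passphrase to the same default path:

```bash
# Optional alternative: generate the key pair without prompts.
ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa -N ''
```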
Copy the SSH public key to each machine:

```bash
while read IP FQDN HOST SUBNET; do
  ssh-copy-id root@${IP}
done < machines.txt
```
Once each key is added, verify SSH public key access is working:

```bash
while read IP FQDN HOST SUBNET; do
  ssh -n root@${IP} uname -o -m
done < machines.txt
```
```text
aarch64 GNU/Linux
aarch64 GNU/Linux
aarch64 GNU/Linux
```
## Hostnames

In this section you will assign hostnames to the `server`, `node-0`, and `node-1` machines. The hostname will be used when executing commands from the `jumpbox` to each machine. The hostname also plays a major role within the cluster. Instead of Kubernetes clients using an IP address to issue commands to the Kubernetes API server, those clients will use the `server` hostname instead. Hostnames are also used by each worker machine, `node-0` and `node-1`, when registering with a given Kubernetes cluster.

To configure the hostname for each machine, run the following commands on the `jumpbox`.

Set the hostname on each machine listed in the `machines.txt` file:

```bash
while read IP FQDN HOST SUBNET; do
  CMD="sed -i 's/^127.0.1.1.*/127.0.1.1\t${FQDN} ${HOST}/' /etc/hosts"
  ssh -n root@${IP} "$CMD"
  ssh -n root@${IP} hostnamectl hostname ${HOST}
done < machines.txt
```
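
If you want to confirm the `sed` edit landed as expected, this optional check, not part of the original lab, prints the rewritten `127.0.1.1` line from each machine:

```bash
# Optional: show the updated loopback entry on every machine.
while read IP FQDN HOST SUBNET; do
  ssh -n root@${IP} "grep ^127.0.1.1 /etc/hosts"
done < machines.txt
```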
Verify the hostname is set on each machine:

```bash
while read IP FQDN HOST SUBNET; do
  ssh -n root@${IP} hostname --fqdn
done < machines.txt
```
```text
server.kubernetes.local
node-0.kubernetes.local
node-1.kubernetes.local
```
## DNS

In this section you will generate a DNS `hosts` file, which will be appended to the local `/etc/hosts` file on the `jumpbox` and to the `/etc/hosts` file of all three machines used for this tutorial. This will allow each machine to be reachable using a hostname such as `server`, `node-0`, or `node-1`.

Create a new `hosts` file and add a header to identify the machines being added:

```bash
echo "" > hosts
echo "# Kubernetes The Hard Way" >> hosts
```
Generate a DNS entry for each machine in the `machines.txt` file and append it to the `hosts` file:

```bash
while read IP FQDN HOST SUBNET; do
  ENTRY="${IP} ${FQDN} ${HOST}"
  echo $ENTRY >> hosts
done < machines.txt
```
Review the DNS entries in the `hosts` file:

```bash
cat hosts
```
```text

# Kubernetes The Hard Way
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
```
## Adding DNS Entries To A Local Machine

In this section you will append the DNS entries from the `hosts` file to the local `/etc/hosts` file on your `jumpbox` machine.

Append the DNS entries from `hosts` to `/etc/hosts`:

```bash
cat hosts >> /etc/hosts
```
Verify that the `/etc/hosts` file has been updated:

```bash
cat /etc/hosts
```
```text
127.0.0.1 localhost
127.0.1.1 jumpbox

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

# Kubernetes The Hard Way
XXX.XXX.XXX.XXX server.kubernetes.local server
XXX.XXX.XXX.XXX node-0.kubernetes.local node-0
XXX.XXX.XXX.XXX node-1.kubernetes.local node-1
```
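
As an optional check, not part of the original lab, you can confirm the new names resolve through `/etc/hosts` using `getent`:

```bash
# Optional: resolve each hostname via the system resolver.
getent hosts server node-0 node-1
```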
At this point you should be able to SSH to each machine listed in the `machines.txt` file using a hostname.

```bash
for host in server node-0 node-1; do
  ssh root@${host} uname -o -m -n
done
```
```text
server aarch64 GNU/Linux
node-0 aarch64 GNU/Linux
node-1 aarch64 GNU/Linux
```
## Adding DNS Entries To The Remote Machines

In this section you will append the DNS entries from `hosts` to `/etc/hosts` on each machine listed in the `machines.txt` text file.

Copy the `hosts` file to each machine and append the contents to `/etc/hosts`:

```bash
while read IP FQDN HOST SUBNET; do
  scp hosts root@${HOST}:~/
  ssh -n root@${HOST} "cat hosts >> /etc/hosts"
done < machines.txt
```
At this point hostnames can be used when connecting to machines from your `jumpbox` machine, or from any of the three machines in the Kubernetes cluster. Instead of using IP addresses you can now connect to machines using a hostname such as `server`, `node-0`, or `node-1`.

Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)