Initial translation to OCI

pull/637/head
Dan Simone 2021-02-04 22:40:21 -08:00
parent ca96371e4d
commit ee9a06afab
14 changed files with 481 additions and 321 deletions

@ -1,4 +1,9 @@
# Kubernetes The Hard Way # Kubernetes The Hard Way on OCI
This is a translation of the extremely useful [Kubernetes the Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way)
by Kelsey Hightower, using [Oracle Cloud Infrastructure](https://www.oracle.com/cloud/) instead of Google Cloud Platform.
*****
This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](https://kubernetes.io/docs/setup). This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](https://kubernetes.io/docs/setup).
@ -27,7 +32,7 @@ Kubernetes The Hard Way guides you through bootstrapping a highly available Kube
## Labs ## Labs
This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com). While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms. This tutorial assumes you have access to [OCI](https://www.oracle.com/cloud/). While OCI is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms.
* [Prerequisites](docs/01-prerequisites.md) * [Prerequisites](docs/01-prerequisites.md)
* [Installing the Client Tools](docs/02-client-tools.md) * [Installing the Client Tools](docs/02-client-tools.md)

@ -1,54 +1,75 @@
# Prerequisites # Prerequisites
## Google Cloud Platform ## Oracle Cloud Infrastructure
This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits. This tutorial leverages [OCI](https://www.oracle.com/cloud/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://www.oracle.com/cloud/free/) for $300 in free credits.
[Estimated cost](https://cloud.google.com/products/calculator#id=873932bc-0840-4176-b0fa-a8cfd4ca61ae) to run this tutorial: $0.23 per hour ($5.50 per day). [Estimated cost](https://www.oracle.com/cloud/cost-estimator.html) to run this tutorial: $0.38 per hour ($9.23 per day).
> The compute resources required for this tutorial exceed the Google Cloud Platform free tier. > The compute resources required for this tutorial exceed the OCI free tier.
## Google Cloud Platform SDK ## OCI CLI
### Install the Google Cloud SDK ### Install the OCI SDK
Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility. Follow the OCI CLI [documentation](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm) to install and configure the `oci` command line utility.
Verify the Google Cloud SDK version is 301.0.0 or higher: Verify the OCI CLI version is 2.17.0 or higher:
``` ```
gcloud version oci --version
``` ```
### Set a Default Compute Region and Zone ### Capture OCIDs and Generate Required Keys
This tutorial assumes a default compute region and zone have been configured. Follow the documentation [here](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/apisigningkey.htm) to fetch your tenancy and user OCIDs and generate an RSA key pair, which are necessary to use the OCI CLI.
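For reference, the key-generation steps from the linked documentation boil down to something like the following sketch (paths are the conventional defaults; defer to the official docs for your setup):
```
mkdir -p ~/.oci
# Generate the API signing key pair
openssl genrsa -out ~/.oci/oci_api_key.pem 2048
chmod 600 ~/.oci/oci_api_key.pem
openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
# Print the key fingerprint, needed in the config file below
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c
```
The public key then needs to be uploaded for your user in the OCI Console, as described in the linked docs.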
If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this: ### Create OCI Config File
Follow the documentation [here](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm) to create an OCI config file `~/.oci/config`. Here's an example of what it will look like:
``` ```
gcloud init [DEFAULT]
user=ocid1.user.oc1..<unique_ID>
fingerprint=<your_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_ID>
region=us-ashburn-1
``` ```
Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials: ### Set a Default Region
The above example uses "us-ashburn-1" as the region, but you can replace this with any available region. For best
performance running the commands from this tutorial, pick a region close to your physical location. To list
the available regions:
``` ```
gcloud auth login oci iam region list
``` ```
Next set a default compute region and compute zone: ### Create a Compartment
Create an OCI compartment, within which we'll create all of the resources in this tutorial. In the
following command, you will need to fill in your tenancy OCID and your OCI [Home Region](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingregions.htm#The).
Your Home Region will be indicated to you when you first create your tenancy. You can also determine it
like [this](https://docs.oracle.com/en-us/iaas/Content/GSG/Reference/faq.htm#How).
``` ```
gcloud config set compute/region us-west1 oci iam compartment create --name kubernetes-the-hard-way --description "Kubernetes the Hard Way" \
--compartment-id <tenancy_ocid> --region <home_region>
``` ```
Set a default compute zone: ### Set this Compartment as the Default
Note the compartment `id` from the output of the above command, and create a file `~/.oci/oci_cli_rc` with
the following content:
``` ```
gcloud config set compute/zone us-west1-c [DEFAULT]
compartment-id=<compartment_id>
``` ```
> Use the `gcloud compute zones list` command to view additional regions and zones. From this point on, all `oci` commands we run will target the above compartment.
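As a quick sanity check that the default compartment is being picked up, the following should run without an explicit `--compartment-id` and (at this point) return no instances:
```
oci compute instance list
```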
## Running Commands in Parallel with tmux ## Running Commands in Parallel with tmux

@ -1,7 +1,44 @@
# Installing the Client Tools # Installing the Client Tools
In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl). In this lab you will install the command line utilities required to complete this tutorial: [jq](https://stedolan.github.io/jq/download/), [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl). You will also define a few shell helper functions.
## Install jq
Install jq:
### OS X
```
curl -o jq -L https://github.com/stedolan/jq/releases/download/jq-1.6/jq-osx-amd64
```
```
chmod +x jq
```
```
sudo mv jq /usr/local/bin/
```
Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:
```
brew install jq
```
### Linux
```
curl -o jq -L https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
```
```
chmod +x jq
```
```
sudo mv jq /usr/local/bin/
```
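Once installed, verify the `jq` version (the binaries downloaded above are 1.6; a Homebrew install may report a different version):
```
jq --version
```
> output
```
jq-1.6
```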
## Install CFSSL ## Install CFSSL
@ -115,4 +152,34 @@ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"} Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
``` ```
## Shell Helper Functions
In your terminal, run the following to define a few shell helper functions that we'll use throughout the tutorial:
```
function oci-ssh(){
# Helper function to ssh into a named OCI compute instance
if [ -z "$1" ]
then
echo "Usage: oci-ssh <compute_instance_name> <optional_command>"
else
ocid=$(oci compute instance list --lifecycle-state RUNNING --display-name $1 | jq -r .data[0].id)
ip=$(oci compute instance list-vnics --instance-id $ocid | jq -r '.data[0]["public-ip"]')
ssh -i kubernetes_ssh_rsa ubuntu@$ip $2
fi
}
function oci-scp(){
# Helper function to scp a set of local files to a named OCI compute instance
if [ -z "$3" ]
then
echo "Usage: oci-scp <local_file_list> <compute_instance_name> <destination>"
else
ocid=$(oci compute instance list --lifecycle-state RUNNING --display-name ${@: (-2):1} | jq -r .data[0].id)
ip=$(oci compute instance list-vnics --instance-id $ocid | jq -r '.data[0]["public-ip"]')
scp -i kubernetes_ssh_rsa "${@:1:$#-2}" ubuntu@$ip:${@: -1}
fi
}
```
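For example, once the compute instances exist (they are created in the next lab) and the `kubernetes_ssh_rsa` key is in your working directory, the helpers can be used like this:
```
# Run a single command on the instance named controller-0
oci-ssh controller-0 "uname -a"

# Copy two local files to worker-0's home directory
oci-scp ca.pem worker-0.pem worker-0 '~/'
```
You may also find it convenient to save these functions to a file of your choosing (for example `oci-helpers.sh`) so they can be re-imported in new shells and tmux panes with `source oci-helpers.sh`.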
Next: [Provisioning Compute Resources](03-compute-resources.md) Next: [Provisioning Compute Resources](03-compute-resources.md)

@ -10,107 +10,81 @@ The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-ad
> Setting up network policies is out of scope for this tutorial. > Setting up network policies is out of scope for this tutorial.
### Virtual Private Cloud Network ### Virtual Cloud Network
In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster. In this section a dedicated [Virtual Cloud Network](https://www.oracle.com/cloud/networking/virtual-cloud-network/) (VCN) will be set up to host the Kubernetes cluster.
Create the `kubernetes-the-hard-way` custom VPC network: Create the `kubernetes-the-hard-way` custom VCN:
``` ```
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom VCN_ID=$(oci network vcn create --display-name kubernetes-the-hard-way --dns-label vcn --cidr-block 10.240.0.0/24 | jq -r .data.id)
``` ```
A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster. A [subnet](https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingVCNs_topic-Overview_of_VCNs_and_Subnets.htm#Overview) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VCN, along with a Route Table and Internet Gateway allowing traffic to the internet.
``` ```
gcloud compute networks subnets create kubernetes \ INTERNET_GATEWAY_ID=$(oci network internet-gateway create --vcn-id $VCN_ID --is-enabled true \
--network kubernetes-the-hard-way \ --display-name kubernetes-the-hard-way | jq -r .data.id)
--range 10.240.0.0/24 ROUTE_TABLE_ID=$(oci network route-table create --vcn-id $VCN_ID --display-name kubernetes-the-hard-way \
--route-rules "[{\"cidrBlock\":\"0.0.0.0/0\",\"networkEntityId\":\"$INTERNET_GATEWAY_ID\"}]" | jq -r .data.id)
SUBNET_ID=$(oci network subnet create --display-name kubernetes --dns-label subnet --vcn-id $VCN_ID \
--cidr-block 10.240.0.0/24 --route-table-id $ROUTE_TABLE_ID | jq -r .data.id)
``` ```
> The `10.240.0.0/24` IP address range can host up to 254 compute instances. > The `10.240.0.0/24` IP address range can host up to 254 compute instances.
### Firewall Rules :warning: **Note**: For simplicity and to stay close to the original kubernetes-the-hard-way, we will be using a single subnet, shared between the Kubernetes worker nodes, control plane nodes, and LoadBalancer. A production-caliber setup would consist of at least:
- A dedicated public subnet for the public LoadBalancer.
Create a firewall rule that allows internal communication across all protocols: - A dedicated private subnet for control plane nodes.
- A dedicated private subnet for worker nodes. This setup would not allow NodePort access to services.
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16
```
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0
```
> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
```
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```
> output
```
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp False
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp Fals
```
### Kubernetes Public IP Address
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
```
gcloud compute addresses create kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
```
Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
```
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
```
> output
```
NAME ADDRESS/RANGE TYPE PURPOSE NETWORK REGION SUBNET STATUS
kubernetes-the-hard-way XX.XXX.XXX.XXX EXTERNAL us-west1 RESERVED
```
## Compute Instances ## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 20.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process. The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 20.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
:warning: **Note**: For simplicity in this tutorial, we will be accessing controller and worker nodes over SSH, using public addresses. A production-caliber setup would instead run controller and worker nodes in _private_ subnets, with any direct SSH access done via [Bastions](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/bastion-hosts.pdf) when required.
### Create SSH Keys
Generate an RSA key pair, which we'll use for SSH access to our compute nodes:
```
ssh-keygen -b 2048 -t rsa -f kubernetes_ssh_rsa
```
Enter a passphrase at the prompt to continue:
```
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
```
Results:
```
kubernetes_ssh_rsa
kubernetes_ssh_rsa.pub
```
### Kubernetes Controllers ### Kubernetes Controllers
Create three compute instances which will host the Kubernetes control plane: Create three compute instances which will host the Kubernetes control plane:
``` ```
IMAGE_ID=$(oci compute image list --operating-system "Canonical Ubuntu" --operating-system-version "20.04" | jq -r .data[0].id)
for i in 0 1 2; do for i in 0 1 2; do
gcloud compute instances create controller-${i} \ # Rudimentary spreading of nodes across Availability Domains and Fault Domains
--async \ NUM_ADS=$(oci iam availability-domain list | jq -r .data | jq length)
--boot-disk-size 200GB \ AD_NAME=$(oci iam availability-domain list | jq -r .data[$((i % NUM_ADS))].name)
--can-ip-forward \ NUM_FDS=$(oci iam fault-domain list --availability-domain $AD_NAME | jq -r .data | jq length)
--image-family ubuntu-2004-lts \ FD_NAME=$(oci iam fault-domain list --availability-domain $AD_NAME | jq -r .data[$((i % NUM_FDS))].name)
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \ oci compute instance launch --display-name controller-${i} --assign-public-ip true \
--private-network-ip 10.240.0.1${i} \ --subnet-id $SUBNET_ID --shape VM.Standard.E3.Flex --availability-domain $AD_NAME \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \ --fault-domain $FD_NAME --image-id $IMAGE_ID --shape-config '{"memoryInGBs": 8.0, "ocpus": 2.0}' \
--subnet kubernetes \ --private-ip 10.240.0.1${i} --freeform-tags '{"project": "kubernetes-the-hard-way","role":"controller"}' \
--tags kubernetes-the-hard-way,controller --metadata "{\"ssh_authorized_keys\":\"$(cat kubernetes_ssh_rsa.pub)\"}"
done done
``` ```
@ -123,99 +97,86 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
Create three compute instances which will host the Kubernetes worker nodes: Create three compute instances which will host the Kubernetes worker nodes:
``` ```
IMAGE_ID=$(oci compute image list --operating-system "Canonical Ubuntu" --operating-system-version "20.04" | jq -r .data[0].id)
for i in 0 1 2; do for i in 0 1 2; do
gcloud compute instances create worker-${i} \ # Rudimentary spreading of nodes across Availability Domains and Fault Domains
--async \ NUM_ADS=$(oci iam availability-domain list | jq -r .data | jq length)
--boot-disk-size 200GB \ AD_NAME=$(oci iam availability-domain list | jq -r .data[$((i % NUM_ADS))].name)
--can-ip-forward \ NUM_FDS=$(oci iam fault-domain list --availability-domain $AD_NAME | jq -r .data | jq length)
--image-family ubuntu-2004-lts \ FD_NAME=$(oci iam fault-domain list --availability-domain $AD_NAME | jq -r .data[$((i % NUM_FDS))].name)
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \ oci compute instance launch --display-name worker-${i} --assign-public-ip true \
--metadata pod-cidr=10.200.${i}.0/24 \ --subnet-id $SUBNET_ID --shape VM.Standard.E3.Flex --availability-domain $AD_NAME \
--private-network-ip 10.240.0.2${i} \ --fault-domain $FD_NAME --image-id $IMAGE_ID --shape-config '{"memoryInGBs": 8.0, "ocpus": 2.0}' \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \ --private-ip 10.240.0.2${i} --freeform-tags '{"project": "kubernetes-the-hard-way","role":"worker"}' \
--subnet kubernetes \ --metadata "{\"ssh_authorized_keys\":\"$(cat kubernetes_ssh_rsa.pub)\",\"pod-cidr\":\"10.200.${i}.0/24\"}" \
--tags kubernetes-the-hard-way,worker --skip-source-dest-check true
done done
``` ```
### Verification ### Verification
List the compute instances in your default compute zone: List the compute instances in our compartment:
``` ```
gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way" oci compute instance list --sort-by DISPLAYNAME --lifecycle-state RUNNING --all | jq -r .data[] | jq '{"display-name","lifecycle-state"}'
``` ```
> output > output
``` ```
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS {
controller-0 us-west1-c e2-standard-2 10.240.0.10 XX.XX.XX.XXX RUNNING "display-name": "controller-0",
controller-1 us-west1-c e2-standard-2 10.240.0.11 XX.XXX.XXX.XX RUNNING "lifecycle-state": "RUNNING"
controller-2 us-west1-c e2-standard-2 10.240.0.12 XX.XXX.XX.XXX RUNNING }
worker-0 us-west1-c e2-standard-2 10.240.0.20 XX.XX.XXX.XXX RUNNING {
worker-1 us-west1-c e2-standard-2 10.240.0.21 XX.XX.XX.XXX RUNNING "display-name": "controller-1",
worker-2 us-west1-c e2-standard-2 10.240.0.22 XX.XXX.XX.XX RUNNING "lifecycle-state": "RUNNING"
}
{
"display-name": "controller-2",
"lifecycle-state": "RUNNING"
}
{
"display-name": "worker-0",
"lifecycle-state": "RUNNING"
}
{
"display-name": "worker-1",
"lifecycle-state": "RUNNING"
}
{
"display-name": "worker-2",
"lifecycle-state": "RUNNING"
}
``` ```
## Configuring SSH Access Rerun the above command until all of the compute instances we created are listed with a lifecycle state of "RUNNING".
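If you prefer not to re-run the command by hand, a simple polling sketch (assuming all six instances were launched) is:
```
while true; do
  COUNT=$(oci compute instance list --lifecycle-state RUNNING --all | jq -r '.data | length')
  [ "${COUNT:-0}" -ge 6 ] && break
  echo "${COUNT:-0}/6 instances RUNNING; waiting..."
  sleep 15
done
```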
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation. ## Verifying SSH Access
Test SSH access to the `controller-0` compute instances: Our subnet was created with a default Security List that allows public SSH access, so we can verify at this point that SSH is working:
``` ```
gcloud compute ssh controller-0 oci-ssh controller-0
``` ```
If this is your first time connecting to a compute instance SSH keys will be generated for you. Enter a passphrase at the prompt to continue: The first time you SSH into a node, you'll see something like the following; enter "yes" at the prompt:
``` ```
WARNING: The public SSH key file for gcloud does not exist. The authenticity of host 'XX.XX.XX.XXX (XX.XX.XX.XXX )' can't be established.
WARNING: The private SSH key file for gcloud does not exist. ECDSA key fingerprint is SHA256:xxxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxx.
WARNING: You do not have an SSH key for gcloud. Are you sure you want to continue connecting (yes/no/[fingerprint])?
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
``` ```
At this point the generated SSH keys will be uploaded and stored in your project:
``` ```
Your identification has been saved in /home/$USER/.ssh/google_compute_engine. Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1029-oracle x86_64)
Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:nz1i8jHmgQuGt+WscqP5SeIaSy5wyIJeL71MuV+QruE $USER@$HOSTNAME
The key's randomart image is:
+---[RSA 2048]----+
| |
| |
| |
| . |
|o. oS |
|=... .o .o o |
|+.+ =+=.+.X o |
|.+ ==O*B.B = . |
| .+.=EB++ o |
+----[SHA256]-----+
Updating project ssh metadata...-Updated [https://www.googleapis.com/compute/v1/projects/$PROJECT_ID].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
```
After the SSH keys have been updated you'll be logged into the `controller-0` instance:
```
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-1019-gcp x86_64)
... ...
``` ```
Type `exit` at the prompt to exit the `controller-0` compute instance: Type `exit` at the prompt to exit the `controller-0` compute instance:
``` ```
$USER@controller-0:~$ exit ubuntu@controller-0:~$ exit
``` ```
> output > output
@ -224,4 +185,121 @@ logout
Connection to XX.XX.XX.XXX closed Connection to XX.XX.XX.XXX closed
``` ```
### Security Lists
For use in later steps of the tutorial, we'll create Security Lists to allow:
- Intra-VCN communication between worker and controller nodes.
- Public access to the NodePort range.
- Public access to the LoadBalancer port.
```
INTRA_VCN_SECURITY_LIST_ID=$(oci network security-list create --vcn-id $VCN_ID --display-name intra-vcn --ingress-security-rules '[
{
"icmp-options": null,
"is-stateless": true,
"protocol": "all",
"source": "10.240.0.0/24",
"source-type": "CIDR_BLOCK",
"tcp-options": null,
"udp-options": null
}]' --egress-security-rules '[]' | jq -r .data.id)
WORKER_SECURITY_LIST_ID=$(oci network security-list create --vcn-id $VCN_ID --display-name worker --ingress-security-rules '[
{
"icmp-options": null,
"is-stateless": false,
"protocol": "6",
"source": "0.0.0.0/0",
"source-type": "CIDR_BLOCK",
"tcp-options": {
"destination-port-range": {
"max": 32767,
"min": 30000
},
"source-port-range": null
},
"udp-options": null
}]' --egress-security-rules '[]' | jq -r .data.id)
LB_SECURITY_LIST_ID=$(oci network security-list create --vcn-id $VCN_ID --display-name load-balancer --ingress-security-rules '[
{
"icmp-options": null,
"is-stateless": false,
"protocol": "6",
"source": "0.0.0.0/0",
"source-type": "CIDR_BLOCK",
"tcp-options": {
"destination-port-range": {
"max": 6443,
"min": 6443
},
"source-port-range": null
},
"udp-options": null
}]' --egress-security-rules '[]' | jq -r .data.id)
```
We'll add these Security Lists to our subnet:
```
DEFAULT_SECURITY_LIST_ID=$(oci network security-list list --display-name "Default Security List for kubernetes-the-hard-way" | jq -r .data[0].id)
oci network subnet update --subnet-id $SUBNET_ID --security-list-ids \
"[\"$DEFAULT_SECURITY_LIST_ID\",\"$INTRA_VCN_SECURITY_LIST_ID\",\"$WORKER_SECURITY_LIST_ID\",\"$LB_SECURITY_LIST_ID\"]" --force
```
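As an optional sanity check, the subnet should now reference all four Security Lists:
```
oci network subnet get --subnet-id $SUBNET_ID | jq -r '.data["security-list-ids"]'
```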
### Firewall Rules
And similarly, we'll open up the firewall of the worker and controller nodes to allow intra-VCN traffic.
```
for instance in controller-0 controller-1 controller-2; do
oci-ssh ${instance} "sudo ufw allow from 10.240.0.0/24;sudo iptables -A INPUT -i ens3 -s 10.240.0.0/24 -j ACCEPT;sudo iptables -F"
done
for instance in worker-0 worker-1 worker-2; do
oci-ssh ${instance} "sudo ufw allow from 10.240.0.0/24;sudo iptables -A INPUT -i ens3 -s 10.240.0.0/24 -j ACCEPT;sudo iptables -F"
done
```
### Provision a Network Load Balancer
> An [OCI Load Balancer](https://docs.oracle.com/en-us/iaas/Content/Balance/Concepts/balanceoverview.htm) will be used to expose the Kubernetes API Servers to remote clients.
Create the Load Balancer:
```
LOADBALANCER_ID=$(oci lb load-balancer create --display-name kubernetes-the-hard-way \
--shape-name 100Mbps --wait-for-state SUCCEEDED --subnet-ids "[\"$SUBNET_ID\"]" | jq -r .data.id)
```
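If you'd like to note the Load Balancer's public IP address now, it can be retrieved the same way later labs do:
```
KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all | jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' | jq -r '."ip-addresses"[0]."ip-address"')
echo $KUBERNETES_PUBLIC_ADDRESS
```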
Create a Backend Set, with Backends for our 3 controller nodes:
```
cat > backends.json <<EOF
[
{
"ipAddress": "10.240.0.10",
"port": 6443,
"weight": 1
},
{
"ipAddress": "10.240.0.11",
"port": 6443,
"weight": 1
},
{
"ipAddress": "10.240.0.12",
"port": 6443,
"weight": 1
}
]
EOF
oci lb backend-set create --name controller-backend-set --load-balancer-id $LOADBALANCER_ID --backends file://backends.json \
--health-checker-interval-in-ms 10000 --health-checker-port 8888 --health-checker-protocol HTTP \
--health-checker-retries 3 --health-checker-return-code 200 --health-checker-timeout-in-ms 3000 \
--health-checker-url-path "/healthz" --policy "ROUND_ROBIN" --wait-for-state SUCCEEDED
oci lb listener create --name controller-listener --default-backend-set-name controller-backend-set \
--port 6443 --protocol TCP --load-balancer-id $LOADBALANCER_ID --wait-for-state SUCCEEDED
```
At this point, the Load Balancer will be shown as being in a "Critical" state. This will be the case until we configure the API server on the controller nodes in subsequent steps.
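If you want to check the health state from the CLI, something like the following should work (a sketch based on the OCI CLI's load balancer health commands; the status should move from "CRITICAL" to "OK" once the API servers and their health-check proxies are up):
```
LOADBALANCER_ID=$(oci lb load-balancer list --all | jq -r '.data[] | select(."display-name"=="kubernetes-the-hard-way") | .id')
oci lb load-balancer-health get --load-balancer-id $LOADBALANCER_ID | jq -r .data.status
```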
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md) Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

@ -115,7 +115,7 @@ Generate a certificate and private key for each Kubernetes worker node:
for instance in worker-0 worker-1 worker-2; do for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF cat > ${instance}-csr.json <<EOF
{ {
"CN": "system:node:${instance}", "CN": "system:node:${instance}.subnet.vcn.oraclevcn.com",
"key": { "key": {
"algo": "rsa", "algo": "rsa",
"size": 2048 "size": 2048
@ -132,17 +132,14 @@ cat > ${instance}-csr.json <<EOF
} }
EOF EOF
# Look up the instance OCID for ${instance} (needed for the VNIC queries below)
ocid=$(oci compute instance list --lifecycle-state RUNNING --display-name ${instance} | jq -r .data[0].id)
EXTERNAL_IP=$(gcloud compute instances describe ${instance} \ EXTERNAL_IP=$(oci compute instance list-vnics --instance-id $ocid | jq -r '.data[0]["public-ip"]')
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)') INTERNAL_IP=$(oci compute instance list-vnics --instance-id $ocid | jq -r '.data[0]["private-ip"]')
INTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')
cfssl gencert \ cfssl gencert \
-ca=ca.pem \ -ca=ca.pem \
-ca-key=ca-key.pem \ -ca-key=ca-key.pem \
-config=ca-config.json \ -config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \ -hostname=${instance},${instance}.subnet.vcn.oraclevcn.com,${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \ -profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance} ${instance}-csr.json | cfssljson -bare ${instance}
done done
@ -292,17 +289,13 @@ kube-scheduler.pem
### The Kubernetes API Server Certificate ### The Kubernetes API Server Certificate
The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients. The `kubernetes-the-hard-way` Load Balancer public IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
Generate the Kubernetes API Server certificate and private key: Generate the Kubernetes API Server certificate and private key:
``` ```
{ {
KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all | jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' | jq -r '."ip-addresses"[0]."ip-address"')
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF cat > kubernetes-csr.json <<EOF
@ -396,7 +389,7 @@ Copy the appropriate certificates and private keys to each worker instance:
``` ```
for instance in worker-0 worker-1 worker-2; do for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/ oci-scp ca.pem ${instance}-key.pem ${instance}.pem ${instance} '~/'
done done
``` ```
@ -404,8 +397,8 @@ Copy the appropriate certificates and private keys to each controller instance:
``` ```
for instance in controller-0 controller-1 controller-2; do for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ oci-scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/ service-account-key.pem service-account.pem ${instance} '~/'
done done
``` ```

@ -10,12 +10,10 @@ In this section you will generate kubeconfig files for the `controller manager`,
Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used. Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
Retrieve the `kubernetes-the-hard-way` static IP address: Retrieve the `kubernetes-the-hard-way` Load Balancer public IP address:
``` ```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all | jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' | jq -r '."ip-addresses"[0]."ip-address"')
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
``` ```
### The kubelet Kubernetes Configuration File ### The kubelet Kubernetes Configuration File
@ -199,7 +197,7 @@ Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker
``` ```
for instance in worker-0 worker-1 worker-2; do for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/ oci-scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance} '~/'
done done
``` ```
@ -207,7 +205,7 @@ Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig f
``` ```
for instance in controller-0 controller-1 controller-2; do for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/ oci-scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance} '~/'
done done
``` ```

@ -36,7 +36,7 @@ Copy the `encryption-config.yaml` encryption config file to each controller inst
``` ```
for instance in controller-0 controller-1 controller-2; do for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/ oci-scp encryption-config.yaml ${instance} '~/'
done done
``` ```

@ -4,16 +4,31 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
## Prerequisites ## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example: The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using our `oci-ssh` shell function. Example:
``` ```
gcloud compute ssh controller-0 oci-ssh controller-0
``` ```
### Running commands in parallel with tmux ### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab. [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
**Note**: be sure to import the shell helper functions defined [here](02-client-tools.md#shell-helper-functions) in each tmux window/pane, as we will make use of them.
## Required Tools
### Install jq
Install jq on each controller instance:
```
{
sudo apt-get update
sudo apt-get -y install jq
}
```
## Bootstrapping an etcd Cluster Member ## Bootstrapping an etcd Cluster Member
### Download and Install the etcd Binaries ### Download and Install the etcd Binaries
@ -47,8 +62,7 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance: The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
``` ```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ INTERNAL_IP=$(curl -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/vnics | jq -r .[0].privateIp)
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
``` ```
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance: Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
@ -68,7 +82,7 @@ Documentation=https://github.com/coreos
[Service] [Service]
Type=notify Type=notify
ExecStart=/usr/local/bin/etcd \\ ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\ --name ${ETCD_NAME}.subnet.vcn.oraclevcn.com \\
--cert-file=/etc/etcd/kubernetes.pem \\ --cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\ --key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\ --peer-cert-file=/etc/etcd/kubernetes.pem \\
@ -82,7 +96,7 @@ ExecStart=/usr/local/bin/etcd \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\ --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\ --advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\ --initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\ --initial-cluster controller-0.subnet.vcn.oraclevcn.com=https://10.240.0.10:2380,controller-1.subnet.vcn.oraclevcn.com=https://10.240.0.11:2380,controller-2.subnet.vcn.oraclevcn.com=https://10.240.0.12:2380 \\
--initial-cluster-state new \\ --initial-cluster-state new \\
--data-dir=/var/lib/etcd --data-dir=/var/lib/etcd
Restart=on-failure Restart=on-failure

@ -4,16 +4,18 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
## Prerequisites ## Prerequisites
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example: The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using our `oci-ssh` shell function. Example:
``` ```
gcloud compute ssh controller-0 oci-ssh controller-0
``` ```
### Running commands in parallel with tmux ### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab. [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
**Note**: be sure to import the shell helper functions defined [here](02-client-tools.md#shell-helper-functions) in each tmux window/pane, as we will make use of them.
## Provision the Kubernetes Control Plane ## Provision the Kubernetes Control Plane
Create the Kubernetes configuration directory: Create the Kubernetes configuration directory:
@ -58,8 +60,7 @@ Install the Kubernetes binaries:
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance: The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
``` ```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ INTERNAL_IP=$(curl -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/vnics | jq -r .[0].privateIp)
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
``` ```
Create the `kube-apiserver.service` systemd unit file: Create the `kube-apiserver.service` systemd unit file:
@ -201,7 +202,7 @@ EOF
### Enable HTTP Health Checks ### Enable HTTP Health Checks
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`. Our [OCI Load Balancer](https://docs.oracle.com/en-us/iaas/Content/Balance/Concepts/balanceoverview.htm) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The Backend Set we created uses an HTTP health check, which means the HTTPS endpoint exposed by the API server cannot be used for health checking. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `8888` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
> The `/healthz` API server endpoint does not require authentication by default. > The `/healthz` API server endpoint does not require authentication by default.
@ -215,8 +216,7 @@ sudo apt-get install -y nginx
``` ```
cat > kubernetes.default.svc.cluster.local <<EOF cat > kubernetes.default.svc.cluster.local <<EOF
server { server {
listen 80; listen 8888;
server_name kubernetes.default.svc.cluster.local;
location /healthz { location /healthz {
proxy_pass https://127.0.0.1:6443/healthz; proxy_pass https://127.0.0.1:6443/healthz;
@ -261,7 +261,7 @@ etcd-2 Healthy {"health":"true"}
Test the nginx HTTP health check proxy: Test the nginx HTTP health check proxy:
``` ```
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1:8888/healthz
``` ```
``` ```
@ -288,7 +288,7 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes. The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
``` ```
gcloud compute ssh controller-0 oci-ssh controller-0
``` ```
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods: Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
@ -341,55 +341,16 @@ EOF
## The Kubernetes Frontend Load Balancer ## The Kubernetes Frontend Load Balancer
In this section you will provision an external load balancer to front the Kubernetes API Servers. The `kubernetes-the-hard-way` static IP address will be attached to the resulting load balancer. Now we'll verify external connectivity to our cluster via the external Load Balancer.
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
### Provision a Network Load Balancer
Create the external load balancer network resources:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
}
```
### Verification ### Verification
> The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**. > The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
Retrieve the `kubernetes-the-hard-way` static IP address: Retrieve the `kubernetes-the-hard-way` Load Balancer public IP address:
``` ```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all | jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' | jq -r '."ip-addresses"[0]."ip-address"')
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
``` ```
Make a HTTP request for the Kubernetes version info: Make a HTTP request for the Kubernetes version info:

@ -4,16 +4,18 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
## Prerequisites ## Prerequisites
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example: The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using our `oci-ssh` shell function. Example:
``` ```
gcloud compute ssh worker-0 oci-ssh worker-0
``` ```
### Running commands in parallel with tmux ### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab. [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
**Note**: be sure to import the shell helper functions defined [here](02-client-tools.md#shell-helper-functions) in each tmux window/pane, as we will make use of them.
## Provisioning a Kubernetes Worker Node ## Provisioning a Kubernetes Worker Node
Install the OS dependencies: Install the OS dependencies:
@ -21,7 +23,7 @@ Install the OS dependencies:
``` ```
{ {
sudo apt-get update sudo apt-get update
sudo apt-get -y install socat conntrack ipset sudo apt-get -y install socat conntrack ipset jq
} }
``` ```
@ -37,7 +39,7 @@ Verify if swap is enabled:
sudo swapon --show sudo swapon --show
``` ```
If output is empthy then swap is not enabled. If swap is enabled run the following command to disable swap immediately: If output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:
``` ```
sudo swapoff -a sudo swapoff -a
@ -45,6 +47,17 @@ sudo swapoff -a
> To ensure swap remains off after reboot consult your Linux distro documentation. > To ensure swap remains off after reboot consult your Linux distro documentation.
### Set net.bridge.bridge-nf-call-iptables=1
To allow pods to access service endpoints whose backend pods run on the same node, run:
```
sudo modprobe br-netfilter
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
```
See [here](https://github.com/kelseyhightower/kubernetes-the-hard-way/issues/561) for reference.
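These settings do not persist across reboots by default. Optionally (an addition beyond the original steps), they can be made persistent with something like:
```
# Load br_netfilter at boot and set the sysctl permanently (file names are arbitrary)
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/99-kubernetes-the-hard-way.conf
```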
### Download and Install Worker Binaries ### Download and Install Worker Binaries
``` ```
@ -90,8 +103,7 @@ Install the worker binaries:
Retrieve the Pod CIDR range for the current compute instance: Retrieve the Pod CIDR range for the current compute instance:
``` ```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ POD_CIDR=$(curl -H "Authorization: Bearer Oracle" -L http://169.254.169.254/opc/v2/instance | jq -r '.metadata["pod-cidr"]')
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
``` ```
Create the `bridge` network configuration file: Create the `bridge` network configuration file:
@ -224,6 +236,7 @@ Requires=containerd.service
[Service] [Service]
ExecStart=/usr/local/bin/kubelet \\ ExecStart=/usr/local/bin/kubelet \\
--hostname-override=${HOSTNAME}.subnet.vcn.oraclevcn.com \\
--config=/var/lib/kubelet/kubelet-config.yaml \\ --config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\ --container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\ --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
@ -297,8 +310,7 @@ EOF
List the registered Kubernetes nodes: List the registered Kubernetes nodes:
``` ```
gcloud compute ssh controller-0 \ oci-ssh controller-0 "kubectl get nodes --kubeconfig admin.kubeconfig"
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
``` ```
> output > output

@ -12,9 +12,7 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
``` ```
{ {
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all | jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' | jq -r '."ip-addresses"[0]."ip-address"')
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
kubectl config set-cluster kubernetes-the-hard-way \ kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \ --certificate-authority=ca.pem \

@ -8,14 +8,16 @@ In this lab you will create a route for each worker node that maps the node's Po
## The Routing Table ## The Routing Table
In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network. In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VCN.
Print the internal IP address and Pod CIDR range for each worker instance: Print the internal IP address and Pod CIDR range for each worker instance:
``` ```
for instance in worker-0 worker-1 worker-2; do for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \ NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name $instance | jq -r .data[0].id)
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' PRIVATE_IP=$(oci compute instance list-vnics --instance-id $NODE_ID | jq -r '.data[0]["private-ip"]')
POD_CIDR=$(oci compute instance list --lifecycle-state RUNNING --display-name $instance | jq -r '.data[0].metadata["pod-cidr"]')
echo "$PRIVATE_IP $POD_CIDR"
done done
``` ```
@ -29,32 +31,50 @@ done
## Routes ## Routes
Create network routes for each worker instance: Here, we'll update our Route Table to include, for each worker node, a route from the worker node's pod CIDR to the worker node's private address:
``` ```
for i in 0 1 2; do {
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ ROUTE_TABLE_ID=$(oci network route-table list --display-name kubernetes-the-hard-way --vcn-id $VCN_ID | jq -r .data[0].id)
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.2${i} \
--destination-range 10.200.${i}.0/24
done
```
List the routes in the `kubernetes-the-hard-way` VPC network: # Fetch worker-0's private IP OCID
NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name worker-0 | jq -r .data[0].id)
``` VNIC_ID=$(oci compute instance list-vnics --instance-id $NODE_ID | jq -r '.data[0]["id"]')
gcloud compute routes list --filter "network: kubernetes-the-hard-way" PRIVATE_IP_WORKER_0=$(oci network private-ip list --vnic-id $VNIC_ID | jq -r '.data[0]["id"]')
``` # Fetch worker-1's private IP OCID
NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name worker-1 | jq -r .data[0].id)
> output VNIC_ID=$(oci compute instance list-vnics --instance-id $NODE_ID | jq -r '.data[0]["id"]')
PRIVATE_IP_WORKER_1=$(oci network private-ip list --vnic-id $VNIC_ID | jq -r '.data[0]["id"]')
``` # Fetch worker-2's private IP OCID
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name worker-2 | jq -r .data[0].id)
default-route-6be823b741087623 kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 VNIC_ID=$(oci compute instance list-vnics --instance-id $NODE_ID | jq -r '.data[0]["id"]')
default-route-cebc434ce276fafa kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 0 PRIVATE_IP_WORKER_2=$(oci network private-ip list --vnic-id $VNIC_ID | jq -r '.data[0]["id"]')
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000 INTERNET_GATEWAY_ID=$(oci network internet-gateway list --vcn-id $VCN_ID | jq -r '.data[0]["id"]')
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
oci network route-table update --rt-id $ROUTE_TABLE_ID --force --route-rules "[
{
\"destination\": \"0.0.0.0/0\",
\"destination-type\": \"CIDR_BLOCK\",
\"network-entity-id\": \"$INTERNET_GATEWAY_ID\"
},
{
\"destination\": \"10.200.0.0/24\",
\"destination-type\": \"CIDR_BLOCK\",
\"network-entity-id\": \"$PRIVATE_IP_WORKER_0\"
},
{
\"destination\": \"10.200.1.0/24\",
\"destination-type\": \"CIDR_BLOCK\",
\"network-entity-id\": \"$PRIVATE_IP_WORKER_1\"
},
{
\"destination\": \"10.200.2.0/24\",
\"destination-type\": \"CIDR_BLOCK\",
\"network-entity-id\": \"$PRIVATE_IP_WORKER_2\"
}
]"
}
``` ```
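Optionally, confirm the new rules are in place by inspecting the Route Table:
```
oci network route-table get --rt-id $ROUTE_TABLE_ID | jq -r '.data["route-rules"]'
```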
Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md) Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)

@ -16,8 +16,8 @@ kubectl create secret generic kubernetes-the-hard-way \
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
``` ```
gcloud compute ssh controller-0 \ oci-ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \ "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \ --endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \ --cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \ --cert=/etc/etcd/kubernetes.pem \
@ -181,19 +181,11 @@ NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}') --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
``` ```
Create a firewall rule that allows remote access to the `nginx` node port:
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
```
Retrieve the external IP address of a worker instance: Retrieve the external IP address of a worker instance:
``` ```
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name worker-0 | jq -r .data[0].id)
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)') EXTERNAL_IP=$(oci compute instance list-vnics --instance-id $NODE_ID | jq -r '.data[0]["public-ip"]')
``` ```
Make an HTTP request using the external IP address and the `nginx` node port: Make an HTTP request using the external IP address and the `nginx` node port:

@ -7,50 +7,51 @@ In this lab you will delete the compute resources created during this tutorial.
Delete the controller and worker compute instances: Delete the controller and worker compute instances:
``` ```
gcloud -q compute instances delete \ for instance in controller-0 controller-1 controller-2 worker-0 worker-1 worker-2; do
controller-0 controller-1 controller-2 \ NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name $instance | jq -r .data[0].id)
worker-0 worker-1 worker-2 \ oci compute instance terminate --instance-id $NODE_ID --force
--zone $(gcloud config get-value compute/zone) done
``` ```
## Networking ## Networking
Delete the external load balancer network resources: Delete the Load Balancer:
``` ```
{ {
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ LOAD_BALANCER_ID=$(oci lb load-balancer list --display-name kubernetes-the-hard-way | jq -r .data[0].id)
--region $(gcloud config get-value compute/region) oci lb load-balancer delete --load-balancer-id $LOAD_BALANCER_ID --force --wait-for-state SUCCEEDED
gcloud -q compute target-pools delete kubernetes-target-pool
gcloud -q compute http-health-checks delete kubernetes
gcloud -q compute addresses delete kubernetes-the-hard-way
} }
``` ```
Delete the `kubernetes-the-hard-way` firewall rules: Delete all resources within the `kubernetes-the-hard-way` VCN:
```
gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \
kubernetes-the-hard-way-allow-external \
kubernetes-the-hard-way-allow-health-check
```
Delete the `kubernetes-the-hard-way` network VPC:
``` ```
{ {
gcloud -q compute routes delete \ VCN_ID=$(oci network vcn list --display-name kubernetes-the-hard-way | jq -r .data[0].id)
kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \
kubernetes-route-10-200-2-0-24
gcloud -q compute networks subnets delete kubernetes SUBNET_ID=$(oci network subnet list --display-name kubernetes --vcn-id $VCN_ID | jq -r .data[0].id)
oci network subnet delete --subnet-id $SUBNET_ID --force
gcloud -q compute networks delete kubernetes-the-hard-way ROUTE_TABLE_ID=$(oci network route-table list --display-name kubernetes-the-hard-way --vcn-id $VCN_ID | jq -r .data[0].id)
oci network route-table delete --rt-id $ROUTE_TABLE_ID --force
INTERNET_GATEWAY_ID=$(oci network internet-gateway list --display-name kubernetes-the-hard-way --vcn-id $VCN_ID | jq -r .data[0].id)
oci network internet-gateway delete --ig-id $INTERNET_GATEWAY_ID --force
SECURITY_LIST_ID=$(oci network security-list list --vcn-id $VCN_ID --display-name intra-vcn | jq -r .data[0].id)
oci network security-list delete --security-list-id $SECURITY_LIST_ID --force
SECURITY_LIST_ID=$(oci network security-list list --vcn-id $VCN_ID --display-name load-balancer | jq -r .data[0].id)
oci network security-list delete --security-list-id $SECURITY_LIST_ID --force
SECURITY_LIST_ID=$(oci network security-list list --vcn-id $VCN_ID --display-name worker | jq -r .data[0].id)
oci network security-list delete --security-list-id $SECURITY_LIST_ID --force
} }
``` ```
And finally, the VCN itself:
```
oci network vcn delete --vcn-id $VCN_ID --force
```
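Optionally (not part of the original tutorial), once the compartment is empty you can delete it as well. Compartment deletion must be initiated from your home region:
```
oci iam compartment delete --compartment-id <compartment_id> --region <home_region> --force
```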