Various documentation polishing and improved shell helper functions (#1)

This commit is contained in:
Dan Simone
2021-02-09 16:32:30 -08:00
parent ee9a06afab
commit 6752b91612
10 changed files with 158 additions and 88 deletions

View File

@@ -157,15 +157,35 @@ Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCom
In your terminal, run the following to define a few shell helper functions that we'll use throughout the tutorial:
```
function oci-fetch-public-ip(){
# Helper function to fetch and stash the public IP for the given OCI compute instance
if [ -z "$1" ]
then
echo "Usage: oci-fetch-public-ip <compute_instance_name>"
else
file=.kubernetes-the-hard-way/$1/public_ip
if [ ! -f "$file" ] || [ ! -s "$file" ];
then
# Fetch the public IP and stash it for quick lookup by later commands
echo "Fetching $1 Public IP..."
mkdir -p .kubernetes-the-hard-way/$1
ocid=$(oci compute instance list --lifecycle-state RUNNING --display-name $1 \
| jq -r .data[0].id)
oci compute instance list-vnics --instance-id $ocid | jq -r '.data[0]["public-ip"]' \
> .kubernetes-the-hard-way/$1/public_ip
fi
fi
}
function oci-ssh(){
# Helper function to ssh into a named OCI compute instance
if [ -z "$1" ]
then
echo "Usage: oci-ssh <compute_instance_name> <optional_command>"
else
oci-fetch-public-ip $1
public_ip=$(cat .kubernetes-the-hard-way/$1/public_ip)
ssh -i kubernetes_ssh_rsa ubuntu@$public_ip $2
fi
}
@@ -175,11 +195,17 @@ function oci-scp(){
then
echo "Usage: oci-scp <local_file_list> <compute_instance_name> <destination>"
else
oci-fetch-public-ip ${@: (-2):1}
public_ip=$(cat .kubernetes-the-hard-way/${@: (-2):1}/public_ip)
scp -i kubernetes_ssh_rsa "${@:1:$#-2}" ubuntu@$public_ip:${@: -1}
fi
}
```
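The `oci-scp` helper above leans on Bash positional-parameter slicing, which can be opaque at first glance. Here is a standalone sketch of the same expansions (the `demo-scp-args` function is hypothetical, purely for illustration):

```shell
# Hypothetical demo of the slicing used by oci-scp (not a tutorial step):
#   ${*:1:$#-2}  -> every argument except the last two (the local file list)
#   ${*: (-2):1} -> the second-to-last argument (the instance name)
#   ${*: -1}     -> the last argument (the remote destination)
demo-scp-args() {
  echo "files: ${*:1:$#-2}"
  echo "instance: ${*: (-2):1}"
  echo "dest: ${*: -1}"
}
demo-scp-args ca.pem kubernetes.pem controller-0 '~/'
```

Called with `ca.pem kubernetes.pem controller-0 '~/'`, it reports `ca.pem kubernetes.pem` as the files, `controller-0` as the instance, and `~/` as the destination.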
For convenience throughout the rest of this tutorial, you can copy the above functions into your shell's profile to avoid redefining them in each of the tmux terminals we'll create. For example, for the Bash shell, copy and paste the above functions into `~/.bashrc`, then refresh the profile in your current terminal session with:
```
. ~/.bashrc
```
Next: [Provisioning Compute Resources](03-compute-resources.md)

View File

@@ -17,7 +17,8 @@ In this section a dedicated [Virtual Cloud Network](https://www.oracle.com/cloud
Create the `kubernetes-the-hard-way` custom VCN:
```
VCN_ID=$(oci network vcn create --display-name kubernetes-the-hard-way --dns-label vcn --cidr-block \
10.240.0.0/24 | jq -r .data.id)
```
A [subnet](https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingVCNs_topic-Overview_of_VCNs_and_Subnets.htm#Overview) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
@@ -25,11 +26,12 @@ A [subnet](https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingVCNs
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VCN, along with a Route Table and Internet Gateway allowing traffic to the internet.
```
INTERNET_GATEWAY_ID=$(oci network internet-gateway create --display-name kubernetes-the-hard-way \
--vcn-id $VCN_ID --is-enabled true | jq -r .data.id)
ROUTE_TABLE_ID=$(oci network route-table create --display-name kubernetes-the-hard-way --vcn-id $VCN_ID \
--route-rules "[{\"cidrBlock\":\"0.0.0.0/0\",\"networkEntityId\":\"$INTERNET_GATEWAY_ID\"}]" \
| jq -r .data.id)
SUBNET_ID=$(oci network subnet create --display-name kubernetes --vcn-id $VCN_ID --dns-label subnet \
--cidr-block 10.240.0.0/24 --route-table-id $ROUTE_TABLE_ID | jq -r .data.id)
```
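The `--route-rules` argument above is a JSON array passed as a single escaped string; the quoting is easier to see in isolation (the OCID below is a placeholder, not a real gateway):

```shell
# Build the --route-rules JSON by hand; the OCID is a placeholder for illustration only
INTERNET_GATEWAY_ID="ocid1.internetgateway.oc1..exampleuniqueID"
ROUTE_RULES="[{\"cidrBlock\":\"0.0.0.0/0\",\"networkEntityId\":\"$INTERNET_GATEWAY_ID\"}]"
echo "$ROUTE_RULES"
```

The outer double quotes let `$INTERNET_GATEWAY_ID` expand while the escaped `\"` survive as the JSON string quotes.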
@@ -72,10 +74,11 @@ kubernetes_ssh_rsa.pub
Create three compute instances which will host the Kubernetes control plane:
```
IMAGE_ID=$(oci compute image list --operating-system "Canonical Ubuntu" --operating-system-version \
"20.04" | jq -r .data[0].id)
NUM_ADS=$(oci iam availability-domain list | jq -r .data | jq length)
for i in 0 1 2; do
# Rudimentary distribution of nodes across Availability Domains and Fault Domains
AD_NAME=$(oci iam availability-domain list | jq -r .data[$((i % NUM_ADS))].name)
NUM_FDS=$(oci iam fault-domain list --availability-domain $AD_NAME | jq -r .data | jq length)
FD_NAME=$(oci iam fault-domain list --availability-domain $AD_NAME | jq -r .data[$((i % NUM_FDS))].name)
@@ -83,7 +86,8 @@ for i in 0 1 2; do
oci compute instance launch --display-name controller-${i} --assign-public-ip true \
--subnet-id $SUBNET_ID --shape VM.Standard.E3.Flex --availability-domain $AD_NAME \
--fault-domain $FD_NAME --image-id $IMAGE_ID --shape-config '{"memoryInGBs": 8.0, "ocpus": 2.0}' \
--private-ip 10.240.0.1${i} \
--freeform-tags '{"project": "kubernetes-the-hard-way","role":"controller"}' \
--metadata "{\"ssh_authorized_keys\":\"$(cat kubernetes_ssh_rsa.pub)\"}"
done
```
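The `$((i % NUM_ADS))` and `$((i % NUM_FDS))` arithmetic above round-robins the three instances across domains. A sketch with assumed counts (a real region reports its own numbers via the CLI):

```shell
# Round-robin placement sketch; 3 ADs and 2 FDs are assumed values for illustration
NUM_ADS=3
NUM_FDS=2
for i in 0 1 2; do
  echo "instance $i -> AD $((i % NUM_ADS)), FD $((i % NUM_FDS))"
done
```

With these counts, instance 2 lands in AD index 2 but wraps back around to FD index 0.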
@@ -97,10 +101,11 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
Create three compute instances which will host the Kubernetes worker nodes:
```
IMAGE_ID=$(oci compute image list --operating-system "Canonical Ubuntu" --operating-system-version \
"20.04" | jq -r .data[0].id)
NUM_ADS=$(oci iam availability-domain list | jq -r .data | jq length)
for i in 0 1 2; do
# Rudimentary distribution of nodes across Availability Domains and Fault Domains
AD_NAME=$(oci iam availability-domain list | jq -r .data[$((i % NUM_ADS))].name)
NUM_FDS=$(oci iam fault-domain list --availability-domain $AD_NAME | jq -r .data | jq length)
FD_NAME=$(oci iam fault-domain list --availability-domain $AD_NAME | jq -r .data[$((i % NUM_FDS))].name)
@@ -108,7 +113,8 @@ for i in 0 1 2; do
oci compute instance launch --display-name worker-${i} --assign-public-ip true \
--subnet-id $SUBNET_ID --shape VM.Standard.E3.Flex --availability-domain $AD_NAME \
--fault-domain $FD_NAME --image-id $IMAGE_ID --shape-config '{"memoryInGBs": 8.0, "ocpus": 2.0}' \
--private-ip 10.240.0.2${i} \
--freeform-tags '{"project": "kubernetes-the-hard-way","role":"worker"}' \
--metadata "{\"ssh_authorized_keys\":\"$(cat kubernetes_ssh_rsa.pub)\",\"pod-cidr\":\"10.200.${i}.0/24\"}" \
--skip-source-dest-check true
done
@@ -119,7 +125,8 @@ done
List the compute instances in our compartment:
```
oci compute instance list --sort-by DISPLAYNAME --lifecycle-state RUNNING --all | jq -r .data[] \
| jq '{"display-name","lifecycle-state"}'
```
> output
@@ -193,57 +200,65 @@ For use in later steps of the tutorial, we'll create Security Lists to allow:
- Public access to the LoadBalancer port.
```
{
INTRA_VCN_SECURITY_LIST_ID=$(oci network security-list create --display-name intra-vcn \
  --vcn-id $VCN_ID --ingress-security-rules '[
  {
    "icmp-options": null,
    "is-stateless": true,
    "protocol": "all",
    "source": "10.240.0.0/24",
    "source-type": "CIDR_BLOCK",
    "tcp-options": null,
    "udp-options": null
  }]' --egress-security-rules '[]' | jq -r .data.id)
WORKER_SECURITY_LIST_ID=$(oci network security-list create --display-name worker \
  --vcn-id $VCN_ID --ingress-security-rules '[
  {
    "icmp-options": null,
    "is-stateless": false,
    "protocol": "6",
    "source": "0.0.0.0/0",
    "source-type": "CIDR_BLOCK",
    "tcp-options": {
      "destination-port-range": {
        "max": 32767,
        "min": 30000
      },
      "source-port-range": null
    },
    "udp-options": null
  }]' --egress-security-rules '[]' | jq -r .data.id)
LB_SECURITY_LIST_ID=$(oci network security-list create --display-name load-balancer \
  --vcn-id $VCN_ID --ingress-security-rules '[
  {
    "icmp-options": null,
    "is-stateless": false,
    "protocol": "6",
    "source": "0.0.0.0/0",
    "source-type": "CIDR_BLOCK",
    "tcp-options": {
      "destination-port-range": {
        "max": 6443,
        "min": 6443
      },
      "source-port-range": null
    },
    "udp-options": null
  }]' --egress-security-rules '[]' | jq -r .data.id)
}
```
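The worker rule opens TCP 30000-32767, which is Kubernetes' default NodePort range. A hypothetical helper to sanity-check a port against that rule:

```shell
# Hypothetical helper mirroring the worker rule's destination-port-range (30000-32767)
in_nodeport_range() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

in_nodeport_range 30080 && echo "30080 would be reachable"
in_nodeport_range 8080  || echo "8080 would be blocked by this rule"
```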
We'll add these Security Lists to our subnet:
```
{
DEFAULT_SECURITY_LIST_ID=$(oci network security-list list --display-name \
"Default Security List for kubernetes-the-hard-way" | jq -r .data[0].id)
oci network subnet update --subnet-id $SUBNET_ID --force --security-list-ids \
"[\"$DEFAULT_SECURITY_LIST_ID\",\"$INTRA_VCN_SECURITY_LIST_ID\",\"$WORKER_SECURITY_LIST_ID\",\"$LB_SECURITY_LIST_ID\"]"
}
```
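The `--security-list-ids` value is again a JSON array in a single string; the same array can be assembled programmatically, which scales better than hand-escaping each entry (the OCIDs below are placeholders):

```shell
# Assemble a --security-list-ids JSON array from a Bash array; placeholder OCIDs
ids=("ocid1.securitylist.oc1..default" "ocid1.securitylist.oc1..intravcn")
json="["
for id in "${ids[@]}"; do
  json+="\"$id\","
done
json="${json%,}]"   # drop the trailing comma, close the array
echo "$json"
```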
### Firewall Rules
@@ -272,6 +287,7 @@ LOADBALANCER_ID=$(oci lb load-balancer create --display-name kubernetes-the-har
Create a Backend Set, with Backends for our 3 controller nodes:
```
{
cat > backends.json <<EOF
[
{
@@ -297,9 +313,10 @@ oci lb backend-set create --name controller-backend-set --load-balancer-id $LOAD
--health-checker-url-path "/healthz" --policy "ROUND_ROBIN" --wait-for-state SUCCEEDED
oci lb listener create --name controller-listener --default-backend-set-name controller-backend-set \
--port 6443 --protocol TCP --load-balancer-id $LOADBALANCER_ID --wait-for-state SUCCEEDED
}
```
At this point, the Load Balancer will be shown as in a "Critical" state - that's ok. This will be the case until we configure the API server on the controller nodes in subsequent steps.
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

View File

@@ -295,7 +295,9 @@ Generate the Kubernetes API Server certificate and private key:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all \
| jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' \
| jq -r '."ip-addresses"[0]."ip-address"')
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF

View File

@@ -13,7 +13,9 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Retrieve the `kubernetes-the-hard-way` Load Balancer public IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all \
| jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' \
| jq -r '."ip-addresses"[0]."ip-address"')
```
### The kubelet Kubernetes Configuration File

View File

@@ -14,7 +14,7 @@ oci-ssh controller-0
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
**Note**: Please ensure you've imported the shell functions defined [here](02-client-tools.md#shell-functions), either within your shell profile or explicitly within each tmux window/pane.
## Required Tools

View File

@@ -350,7 +350,9 @@ Now we'll verify external connectivity to our cluster via the external Load Bala
Retrieve the `kubernetes-the-hard-way` Load Balancer public IP address:
```
KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all \
| jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' \
| jq -r '."ip-addresses"[0]."ip-address"')
```
Make an HTTP request for the Kubernetes version info:

View File

@@ -14,7 +14,7 @@ oci-ssh worker-0
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
**Note**: Please ensure you've imported the shell functions defined [here](02-client-tools.md#shell-functions), either within your shell profile or explicitly within each tmux window/pane.
## Provisioning a Kubernetes Worker Node

View File

@@ -12,8 +12,10 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
```
{
KUBERNETES_PUBLIC_ADDRESS=$(oci lb load-balancer list --all \
| jq '.data[] | select(."display-name"=="kubernetes-the-hard-way")' \
| jq -r '."ip-addresses"[0]."ip-address"')
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \

View File

@@ -14,9 +14,12 @@ Print the internal IP address and Pod CIDR range for each worker instance:
```
for instance in worker-0 worker-1 worker-2; do
NODE_ID=$(oci compute instance list --display-name $instance --lifecycle-state RUNNING \
| jq -r .data[0].id)
PRIVATE_IP=$(oci compute instance list-vnics --instance-id $NODE_ID \
| jq -r '.data[0]["private-ip"]')
POD_CIDR=$(oci compute instance list --display-name $instance --lifecycle-state RUNNING \
| jq -r '.data[0].metadata["pod-cidr"]')
echo "$PRIVATE_IP $POD_CIDR"
done
```
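The pod CIDRs printed above come straight from the `pod-cidr` metadata each worker was launched with, which was in turn derived from the loop index; the mapping is simply:

```shell
# Worker index -> pod CIDR mapping used at instance launch (10.200.<i>.0/24)
for i in 0 1 2; do
  echo "worker-${i} 10.200.${i}.0/24"
done
```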
@@ -35,16 +38,19 @@ Here, we'll update our Route Table to include, for each worker node, a route fro
```
{
ROUTE_TABLE_ID=$(oci network route-table list --display-name kubernetes-the-hard-way \
--vcn-id $VCN_ID | jq -r .data[0].id)
# Fetch worker-0's private IP OCID
NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name worker-0 | jq -r .data[0].id)
VNIC_ID=$(oci compute instance list-vnics --instance-id $NODE_ID | jq -r '.data[0]["id"]')
PRIVATE_IP_WORKER_0=$(oci network private-ip list --vnic-id $VNIC_ID | jq -r '.data[0]["id"]')
# Fetch worker-1's private IP OCID
NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name worker-1 | jq -r .data[0].id)
VNIC_ID=$(oci compute instance list-vnics --instance-id $NODE_ID | jq -r '.data[0]["id"]')
PRIVATE_IP_WORKER_1=$(oci network private-ip list --vnic-id $VNIC_ID | jq -r '.data[0]["id"]')
# Fetch worker-2's private IP OCID
NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name worker-2 | jq -r .data[0].id)
VNIC_ID=$(oci compute instance list-vnics --instance-id $NODE_ID | jq -r '.data[0]["id"]')

View File

@@ -8,8 +8,9 @@ Delete the controller and worker compute instances:
```
for instance in controller-0 controller-1 controller-2 worker-0 worker-1 worker-2; do
NODE_ID=$(oci compute instance list --lifecycle-state RUNNING --display-name $instance \
| jq -r .data[0].id)
oci compute instance terminate --instance-id $NODE_ID --wait-for-state TERMINATED --force
done
```
@@ -30,28 +31,40 @@ Delete all resources within the `kubernetes-the-hard-way` VCN:
{
VCN_ID=$(oci network vcn list --display-name kubernetes-the-hard-way | jq -r .data[0].id)
SUBNET_ID=$(oci network subnet list --display-name kubernetes --vcn-id $VCN_ID \
| jq -r .data[0].id)
oci network subnet delete --subnet-id $SUBNET_ID --force
ROUTE_TABLE_ID=$(oci network route-table list --display-name kubernetes-the-hard-way \
--vcn-id $VCN_ID | jq -r .data[0].id)
oci network route-table delete --rt-id $ROUTE_TABLE_ID --force
INTERNET_GATEWAY_ID=$(oci network internet-gateway list --display-name kubernetes-the-hard-way \
--vcn-id $VCN_ID | jq -r .data[0].id)
oci network internet-gateway delete --ig-id $INTERNET_GATEWAY_ID --force
SECURITY_LIST_ID=$(oci network security-list list --vcn-id $VCN_ID --display-name intra-vcn \
| jq -r .data[0].id)
oci network security-list delete --security-list-id $SECURITY_LIST_ID --force
SECURITY_LIST_ID=$(oci network security-list list --vcn-id $VCN_ID --display-name load-balancer \
| jq -r .data[0].id)
oci network security-list delete --security-list-id $SECURITY_LIST_ID --force
SECURITY_LIST_ID=$(oci network security-list list --vcn-id $VCN_ID --display-name worker \
| jq -r .data[0].id)
oci network security-list delete --security-list-id $SECURITY_LIST_ID --force
}
```
And the VCN itself:
```
oci network vcn delete --vcn-id $VCN_ID --force
```
Finally, remove the artifacts generated by our shell helper functions:
```
rm -rf .kubernetes-the-hard-way
```