Added AWS code.

pull/374/head
Wojciech Knapik 2018-07-25 13:18:38 +02:00
parent b974042d95
commit e31f660762
13 changed files with 1594 additions and 2 deletions

View File

@ -22,7 +22,7 @@ Kubernetes The Hard Way guides you through bootstrapping a highly available Kube
## Labs
This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com/) or [Amazon Web Services](https://aws.amazon.com/). While GCP/AWS is used for basic infrastructure requirements, the lessons learned in this tutorial can be applied to other platforms.
* [Prerequisites](docs/01-prerequisites.md)
* [Installing the Client Tools](docs/02-client-tools.md)

View File

@ -1,5 +1,10 @@
# Prerequisites
This tutorial uses Google Cloud Platform (`GCP`) to provision the infrastructure required by the Kubernetes cluster; however, code is also provided for users who prefer to use Amazon Web Services, in expandable sections marked `AWS`.
<details open>
<summary>GCP</summary>
## Google Cloud Platform
This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits.
@ -43,6 +48,44 @@ gcloud config set compute/zone us-west1-c
```
> Use the `gcloud compute zones list` command to view additional regions and zones.
</details>
<details>
<summary>AWS</summary>
## Amazon Web Services
This tutorial leverages [Amazon Web Services](https://aws.amazon.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://portal.aws.amazon.com/billing/signup) for [12 months of free services](https://aws.amazon.com/free/).
> The compute resources required for this tutorial exceed the Amazon Web Services free tier.
## Amazon Web Services CLI
### Install the Amazon Web Services CLI
Follow the Amazon Web Services CLI [documentation](https://aws.amazon.com/cli/) to install and configure the `aws` command line utility.
### Configure the kubernetes-the-hard-way profile
Throughout this tutorial, an AWS CLI profile named `kubernetes-the-hard-way` will be used.
Create the profile and set its default region (us-west-2 in this example):
```
aws configure set region us-west-2 \
--profile kubernetes-the-hard-way
```
Set the credentials for the profile to the same set as in the default profile:
```
aws configure set aws_access_key_id "$(aws configure get aws_access_key_id)" \
--profile kubernetes-the-hard-way
aws configure set aws_secret_access_key "$(aws configure get aws_secret_access_key)" \
--profile kubernetes-the-hard-way
```
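The stored values can be read back with `aws configure get` (the same subcommand used above) as a quick, optional sanity check that the profile was written correctly:
```
aws configure get region --profile kubernetes-the-hard-way
aws configure get aws_access_key_id --profile kubernetes-the-hard-way
```
The first command should print `us-west-2` (or whichever region was chosen).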
</details>
## Running Commands in Parallel with tmux

View File

@ -12,16 +12,52 @@ The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-ad
### Virtual Private Cloud Network
In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be set up to host the Kubernetes cluster.
Create the `kubernetes-the-hard-way` custom VPC network:
<details open>
<summary>GCP</summary>
```
gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 create-vpc \
--cidr-block 10.240.0.0/24 \
--profile kubernetes-the-hard-way \
--query Vpc.VpcId \
--output text)"
```
```
for opt in support hostnames; do
aws ec2 modify-vpc-attribute \
--vpc-id "$VPC_ID" \
--enable-dns-"$opt" '{"Value": true}' \
--profile kubernetes-the-hard-way
done
aws ec2 create-tags \
--resources "$VPC_ID" \
--tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \
--profile kubernetes-the-hard-way
```
</details>
<p></p>
A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
<details open>
<summary>GCP</summary>
Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
```
@ -30,12 +66,92 @@ gcloud compute networks subnets create kubernetes \
--range 10.240.0.0/24
```
</details>
<details>
<summary>AWS</summary>
```
DHCP_OPTIONS_ID="$(aws ec2 create-dhcp-options \
--dhcp-configuration \
"Key=domain-name,Values=$(aws configure get region --profile kubernetes-the-hard-way).compute.internal" \
"Key=domain-name-servers,Values=AmazonProvidedDNS" \
--profile kubernetes-the-hard-way \
--query DhcpOptions.DhcpOptionsId \
--output text)"
aws ec2 create-tags \
--resources "$DHCP_OPTIONS_ID" \
--tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \
--profile kubernetes-the-hard-way
aws ec2 associate-dhcp-options \
--dhcp-options-id "$DHCP_OPTIONS_ID" \
--vpc-id "$VPC_ID" \
--profile kubernetes-the-hard-way
SUBNET_ID="$(aws ec2 create-subnet \
--vpc-id "$VPC_ID" \
--cidr-block 10.240.0.0/24 \
--profile kubernetes-the-hard-way \
--query Subnet.SubnetId \
--output text)"
aws ec2 create-tags \
--resources "$SUBNET_ID" \
--tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \
--profile kubernetes-the-hard-way
INTERNET_GATEWAY_ID="$(aws ec2 create-internet-gateway \
--profile kubernetes-the-hard-way \
--query InternetGateway.InternetGatewayId \
--output text)"
aws ec2 create-tags \
--resources "$INTERNET_GATEWAY_ID" \
--tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \
--profile kubernetes-the-hard-way
aws ec2 attach-internet-gateway \
--internet-gateway-id "$INTERNET_GATEWAY_ID" \
--vpc-id "$VPC_ID" \
--profile kubernetes-the-hard-way
ROUTE_TABLE_ID="$(aws ec2 create-route-table \
--vpc-id "$VPC_ID" \
--profile kubernetes-the-hard-way \
--query RouteTable.RouteTableId \
--output text)"
aws ec2 create-tags \
--resources "$ROUTE_TABLE_ID" \
--tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \
--profile kubernetes-the-hard-way
aws ec2 associate-route-table \
--route-table-id "$ROUTE_TABLE_ID" \
--subnet-id "$SUBNET_ID" \
--profile kubernetes-the-hard-way
aws ec2 create-route \
--route-table-id "$ROUTE_TABLE_ID" \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id "$INTERNET_GATEWAY_ID" \
--profile kubernetes-the-hard-way
```
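As an optional check, the new route table can be inspected to confirm that the default route (`0.0.0.0/0`) points at the internet gateway; this uses `describe-route-tables`, the same call the pod routes lab relies on later:
```
aws ec2 describe-route-tables \
  --filters Name=route-table-id,Values="$ROUTE_TABLE_ID" \
  --profile kubernetes-the-hard-way \
  --query 'RouteTables[0].Routes[].[DestinationCidrBlock,GatewayId]' \
  --output table
```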
</details>
<p></p>
> The `10.240.0.0/24` IP address range can host up to 254 compute instances.
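The 254 figure is just the usable host count of a `/24` (2^8 addresses minus the network and broadcast addresses); note that Amazon VPC additionally reserves the first four and the last address of every subnet, so the practical limit there is slightly lower (251). A quick check of the arithmetic:
```
echo $(( 2**(32-24) - 2 ))   # 254 usable addresses in a /24
```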
### Firewall Rules
Create a firewall rule that allows internal communication across all protocols:
<details open>
<summary>GCP</summary>
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
@ -43,8 +159,48 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--source-ranges 10.240.0.0/24,10.200.0.0/16
```
</details>
<details>
<summary>AWS</summary>
```
SECURITY_GROUP_ID="$(aws ec2 create-security-group \
--group-name kubernetes-the-hard-way \
--description kubernetes-the-hard-way \
--vpc-id "$VPC_ID" \
--profile kubernetes-the-hard-way \
--query GroupId \
--output text)"
aws ec2 create-tags \
--resources "$SECURITY_GROUP_ID" \
--tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \
--profile kubernetes-the-hard-way
```
```
allow() {
aws ec2 authorize-security-group-ingress \
--profile kubernetes-the-hard-way \
--group-id "$SECURITY_GROUP_ID" \
"$@"
}
allow --protocol all --source-group "$SECURITY_GROUP_ID"
for network in 10.200.0.0/16 10.240.0.0/24; do
allow --protocol all --cidr "$network"
done
```
</details>
<p></p>
Create a firewall rule that allows external SSH, ICMP, and HTTPS:
<details open>
<summary>GCP</summary>
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
@ -54,8 +210,29 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
> An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients.
</details>
<details>
<summary>AWS</summary>
```
allow --protocol icmp --port 3-4 --cidr 0.0.0.0/0
for port in 22 6443; do
allow --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```
> An [external load balancer](https://aws.amazon.com/elasticloadbalancing/) will be used to expose the Kubernetes API Servers to remote clients.
</details>
<p></p>
List the firewall rules in the `kubernetes-the-hard-way` VPC network:
<details open>
<summary>GCP</summary>
```
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
```
@ -68,8 +245,77 @@ kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp
```
</details>
<details>
<summary>AWS</summary>
```
aws ec2 describe-security-groups \
--filters Name=group-id,Values="$SECURITY_GROUP_ID" \
--profile kubernetes-the-hard-way \
--query 'SecurityGroups[0].IpPermissions[].{GroupIds:UserIdGroupPairs[].GroupId,FromPort:FromPort,ToPort:ToPort,IpProtocol:IpProtocol,CidrIps:IpRanges[].CidrIp}' \
--output table|\
sed 's/| *DescribeSecurityGroups *|//g'|\
tail -n +3
```
> output
```
+----------+--------------+----------+
| FromPort | IpProtocol | ToPort |
+----------+--------------+----------+
| None | -1 | None |
+----------+--------------+----------+
|| CidrIps ||
|+----------------------------------+|
|| 10.200.0.0/16 ||
|| 10.240.0.0/24 ||
|+----------------------------------+|
|| GroupIds ||
|+----------------------------------+|
|| sg-b33811c3 ||
|+----------------------------------+|
+----------+--------------+----------+
| FromPort | IpProtocol | ToPort |
+----------+--------------+----------+
| 22 | tcp | 22 |
+----------+--------------+----------+
|| CidrIps ||
|+----------------------------------+|
|| 0.0.0.0/0 ||
|+----------------------------------+|
+----------+--------------+----------+
| FromPort | IpProtocol | ToPort |
+----------+--------------+----------+
| 6443 | tcp | 6443 |
+----------+--------------+----------+
|| CidrIps ||
|+----------------------------------+|
|| 0.0.0.0/0 ||
|+----------------------------------+|
+----------+--------------+----------+
| FromPort | IpProtocol | ToPort |
+----------+--------------+----------+
| 3 | icmp | 4 |
+----------+--------------+----------+
|| CidrIps ||
|+----------------------------------+|
|| 0.0.0.0/0 ||
|+----------------------------------+|
```
</details>
### Kubernetes Public IP Address
<details open>
<summary>GCP</summary>
Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
```
@ -90,14 +336,120 @@ NAME REGION ADDRESS STATUS
kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED
```
</details>
<details>
<summary>AWS</summary>
```
aws elb create-load-balancer \
--load-balancer-name kubernetes-the-hard-way \
--listeners Protocol=TCP,LoadBalancerPort=6443,InstanceProtocol=TCP,InstancePort=6443 \
--subnets "$SUBNET_ID" \
--security-groups "$SECURITY_GROUP_ID" \
--profile kubernetes-the-hard-way
```
> output
```
{
"DNSName": "kubernetes-the-hard-way-382204365.us-west-2.elb.amazonaws.com"
}
```
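The load balancer's DNS name shown above plays the role of the static IP on GCP; it can be looked up again at any point (later labs use exactly this query) with:
```
aws elb describe-load-balancers \
  --load-balancer-name kubernetes-the-hard-way \
  --profile kubernetes-the-hard-way \
  --query 'LoadBalancerDescriptions[0].DNSName' \
  --output text
```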
</details>
## Compute Instances
The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
<details>
<summary>AWS</summary>
### Create Instance IAM Policies
```
cat >kubernetes-iam-role.json <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}]
}
EOF
aws iam create-role \
--role-name kubernetes-the-hard-way \
--assume-role-policy-document file://kubernetes-iam-role.json \
--profile kubernetes-the-hard-way
cat >kubernetes-iam-policy.json <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Resource": "*",
"Action": [
"ec2:*",
"elasticloadbalancing:*",
"route53:*",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage"
]
}]
}
EOF
aws iam put-role-policy \
--role-name kubernetes-the-hard-way \
--policy-name kubernetes-the-hard-way \
--policy-document file://kubernetes-iam-policy.json \
--profile kubernetes-the-hard-way
aws iam create-instance-profile \
--instance-profile-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way
aws iam add-role-to-instance-profile \
--instance-profile-name kubernetes-the-hard-way \
--role-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way
```
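Optionally, confirm that the role ended up attached to the instance profile before launching any instances:
```
aws iam get-instance-profile \
  --instance-profile-name kubernetes-the-hard-way \
  --profile kubernetes-the-hard-way \
  --query 'InstanceProfile.Roles[].RoleName' \
  --output text
```
The command should print `kubernetes-the-hard-way`.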
### Choosing an Image
```
IMAGE_ID="$(aws ec2 describe-images \
--owners 099720109477 \
--region "$(aws configure get region --profile kubernetes-the-hard-way)" \
--filters \
Name=root-device-type,Values=ebs \
Name=architecture,Values=x86_64 \
'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*' \
--profile kubernetes-the-hard-way \
--query 'sort_by(Images,&Name)[-1].ImageId' \
--output text)"
```
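To see which AMI was actually picked (the name and creation date vary by region and change as Canonical publishes new images), an optional check:
```
aws ec2 describe-images \
  --image-ids "$IMAGE_ID" \
  --profile kubernetes-the-hard-way \
  --query 'Images[0].[ImageId,Name,CreationDate]' \
  --output text
```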
</details>
### Kubernetes Controllers
Create three compute instances which will host the Kubernetes control plane:
<details open>
<summary>GCP</summary>
```
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
@ -114,6 +466,47 @@ for i in 0 1 2; do
done
```
</details>
<details>
<summary>AWS</summary>
```
# Create a key pair for SSH access to the EC2 instances.
aws ec2 create-key-pair \
--key-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query KeyMaterial \
--output text >~/.ssh/kubernetes-the-hard-way
chmod 600 ~/.ssh/kubernetes-the-hard-way
for i in 0 1 2; do
instance_id="$(aws ec2 run-instances \
--associate-public-ip-address \
--iam-instance-profile Name=kubernetes-the-hard-way \
--image-id "$IMAGE_ID" \
--count 1 \
--key-name kubernetes-the-hard-way \
--security-group-ids "$SECURITY_GROUP_ID" \
--instance-type t2.small \
--private-ip-address "10.240.0.1$i" \
--subnet-id "$SUBNET_ID" \
--user-data "name=controller-$i" \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=controller-$i},{Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared}]" \
--profile kubernetes-the-hard-way \
--query 'Instances[].InstanceId' \
--output text)"
aws ec2 modify-instance-attribute \
--instance-id "$instance_id" \
--no-source-dest-check \
--profile kubernetes-the-hard-way
done
```
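Optionally, wait for all three controllers to reach the `running` state before continuing (the same check works for the workers created in the next step):
```
aws ec2 wait instance-running \
  --filters \
    Name=vpc-id,Values="$VPC_ID" \
    Name=tag:Name,Values=controller-0,controller-1,controller-2 \
  --profile kubernetes-the-hard-way
```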
</details>
### Kubernetes Workers
Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime.
@ -122,6 +515,9 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
Create three compute instances which will host the Kubernetes worker nodes:
<details open>
<summary>GCP</summary>
```
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
@ -139,10 +535,45 @@ for i in 0 1 2; do
done
```
</details>
<details>
<summary>AWS</summary>
```
for i in 0 1 2; do
instance_id="$(aws ec2 run-instances \
--associate-public-ip-address \
--iam-instance-profile Name=kubernetes-the-hard-way \
--image-id "$IMAGE_ID" \
--count 1 \
--key-name kubernetes-the-hard-way \
--security-group-ids "$SECURITY_GROUP_ID" \
--instance-type t2.small \
--private-ip-address "10.240.0.2$i" \
--subnet-id "$SUBNET_ID" \
--user-data "name=worker-$i|pod-cidr=10.200.$i.0/24" \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=worker-$i},{Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared}]" \
--profile kubernetes-the-hard-way \
--query 'Instances[].InstanceId' \
--output text)"
aws ec2 modify-instance-attribute \
--instance-id "$instance_id" \
--no-source-dest-check \
--profile kubernetes-the-hard-way
done
```
</details>
### Verification
List the compute instances in your default compute zone:
<details open>
<summary>GCP</summary>
```
gcloud compute instances list
```
@ -159,8 +590,43 @@ worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX
worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING
```
</details>
<details>
<summary>AWS</summary>
```
aws ec2 describe-instances \
--filters \
Name=instance-state-name,Values=running \
Name=vpc-id,Values="$VPC_ID" \
--profile kubernetes-the-hard-way \
--query 'Reservations[].Instances[]|sort_by(@, &Tags[?Key==`Name`]|[0].Value)[].[Tags[?Key==`Name`]|[0].Value,InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress]' \
--output table
```
> output
```
----------------------------------------------------------------------------------------
| DescribeInstances |
+--------------+-----------------------+-------------+--------------+------------------+
| controller-0| i-07c33497b7e6ee5ce | us-west-2a | 10.240.0.10 | 34.216.239.194 |
| controller-1| i-099ffe8ec525f6bdb | us-west-2a | 10.240.0.11 | 54.186.157.115 |
| controller-2| i-00c1800423320d12f | us-west-2a | 10.240.0.12 | 52.12.162.200 |
| worker-0 | i-00020c75b6703aa99 | us-west-2a | 10.240.0.20 | 54.212.17.18 |
| worker-1 | i-0bf4c8f9f36012d0e | us-west-2a | 10.240.0.21 | 34.220.143.249 |
| worker-2 | i-0b4d2dd686ddd1e1a | us-west-2a | 10.240.0.22 | 35.165.251.149 |
+--------------+-----------------------+-------------+--------------+------------------+
```
</details>
## Configuring SSH Access
<details open>
<summary>GCP</summary>
SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as described in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation.
Test SSH access to the `controller-0` compute instance:
@ -227,4 +693,27 @@ logout
Connection to XX.XXX.XXX.XXX closed
```
</details>
<details>
<summary>AWS</summary>
```
get_ip() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
}
```
```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)"
```
</details>
<p></p>
Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)

View File

@ -111,6 +111,9 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
Generate a certificate and private key for each Kubernetes worker node:
<details open>
<summary>GCP</summary>
```
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
@ -148,6 +151,65 @@ cfssl gencert \
done
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
```
```
for i in 0 1 2; do
instance="worker-$i"
hostname="ip-10-240-0-2$i"
cut -c3- >"$instance-csr.json" <<EOF
  {
    "CN": "system:node:$hostname",
    "key": {
      "algo": "rsa",
      "size": 2048
    },
    "names": [
      {
        "C": "US",
        "L": "Portland",
        "O": "system:nodes",
        "OU": "Kubernetes The Hard Way",
        "ST": "Oregon"
      }
    ]
  }
EOF
INT_EXT_IP="$(aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$instance" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].[PrivateIpAddress,PublicIpAddress]' \
--output text)"
INTERNAL_IP="$(echo "$INT_EXT_IP"|cut -f1)"
EXTERNAL_IP="$(echo "$INT_EXT_IP"|cut -f2)"
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname="$hostname,$EXTERNAL_IP,$INTERNAL_IP" \
-profile=kubernetes \
"$instance-csr.json"|cfssljson -bare "$instance"
done
```
</details>
<p></p>
Results:
```
@ -296,6 +358,9 @@ The `kubernetes-the-hard-way` static IP address will be included in the list of
Generate the Kubernetes API Server certificate and private key:
<details open>
<summary>GCP</summary>
```
{
@ -333,6 +398,49 @@ cfssl gencert \
}
```
</details>
<details>
<summary>AWS</summary>
```
KUBERNETES_PUBLIC_ADDRESS="$(aws elb describe-load-balancers \
--load-balancer-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'LoadBalancerDescriptions[0].DNSName' \
--output text)"
cat >kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,ip-10-240-0-10,ip-10-240-0-11,ip-10-240-0-12,$KUBERNETES_PUBLIC_ADDRESS,127.0.0.1,kubernetes.default \
-profile=kubernetes \
kubernetes-csr.json|cfssljson -bare kubernetes
```
</details>
<p></p>
Results:
```
@ -390,14 +498,46 @@ service-account.pem
Copy the appropriate certificates and private keys to each worker instance:
<details open>
<summary>GCP</summary>
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
```
</details>
<details>
<summary>AWS</summary>
```
get_ip() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
}
```
```
for instance in worker-0 worker-1 worker-2; do
scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \
ca.pem "$instance-key.pem" "$instance.pem" "ubuntu@$(get_ip "$instance"):~/"
done
```
</details>
<p></p>
Copy the appropriate certificates and private keys to each controller instance:
<details open>
<summary>GCP</summary>
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
@ -405,6 +545,22 @@ for instance in controller-0 controller-1 controller-2; do
done
```
</details>
<details>
<summary>AWS</summary>
```
for instance in controller-0 controller-1 controller-2; do
scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \
ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem \
"ubuntu@$(get_ip "$instance"):~/"
done
```
</details>
<p></p>
> The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)

View File

@ -12,18 +12,39 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Retrieve the `kubernetes-the-hard-way` static IP address:
<details open>
<summary>GCP</summary>
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
</details>
<details>
<summary>AWS</summary>
```
KUBERNETES_PUBLIC_ADDRESS="$(aws elb describe-load-balancers \
--load-balancer-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'LoadBalancerDescriptions[0].DNSName' \
--output text)"
```
</details>
### The kubelet Kubernetes Configuration File
When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
Generate a kubeconfig file for each worker node:
<details open>
<summary>GCP</summary>
```
for instance in worker-0 worker-1 worker-2; do
kubectl config set-cluster kubernetes-the-hard-way \
@ -47,6 +68,41 @@ for instance in worker-0 worker-1 worker-2; do
done
```
</details>
<details>
<summary>AWS</summary>
```
for i in 0 1 2; do
instance="worker-$i"
hostname="ip-10-240-0-2$i"
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server="https://$KUBERNETES_PUBLIC_ADDRESS:6443" \
--kubeconfig="$instance.kubeconfig"
kubectl config set-credentials "system:node:$hostname" \
--client-certificate="$instance.pem" \
--client-key="$instance-key.pem" \
--embed-certs=true \
--kubeconfig="$instance.kubeconfig"
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user="system:node:$hostname" \
--kubeconfig="$instance.kubeconfig"
kubectl config use-context default \
--kubeconfig="$instance.kubeconfig"
done
```
</details>
<p></p>
Results:
```
@ -195,18 +251,72 @@ admin.kubeconfig
Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
<details open>
<summary>GCP</summary>
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
get_ip() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
}
```
```
for instance in worker-0 worker-1 worker-2; do
scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \
"$instance.kubeconfig" kube-proxy.kubeconfig "ubuntu@$(get_ip "$instance"):~/"
done
```
</details>
<p></p>
Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
<details open>
<summary>GCP</summary>
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
```
</details>
<details>
<summary>AWS</summary>
```
for instance in controller-0 controller-1 controller-2; do
scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \
admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig \
"ubuntu@$(get_ip "$instance"):~/"
done
```
</details>
<p></p>
Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)

View File

@ -34,10 +34,45 @@ EOF
Copy the `encryption-config.yaml` encryption config file to each controller instance:
<details open>
<summary>GCP</summary>
```
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
get_ip() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
}
```
```
for instance in controller-0 controller-1 controller-2; do
scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \
encryption-config.yaml "ubuntu@$(get_ip "$instance"):~/"
done
```
</details>
<p></p>
Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md)

View File

@ -6,10 +6,41 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:
<details open>
<summary>GCP</summary>
```
gcloud compute ssh controller-0
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
get_ip() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
}
```
```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)"
```
</details>
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
@ -45,17 +76,47 @@ Extract and install the `etcd` server and the `etcdctl` command line utility:
The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
<details open>
<summary>GCP</summary>
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
</details>
<details>
<summary>AWS</summary>
```
INTERNAL_IP="$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
```
</details>
<p></p>
Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
<details open>
<summary>GCP</summary>
```
ETCD_NAME=$(hostname -s)
```
</details>
<details>
<summary>AWS</summary>
```
ETCD_NAME="$(curl -s http://169.254.169.254/latest/user-data/|tr '|' '\n'|grep '^name='|cut -d= -f2)"
```
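This relies on the `name=...` user data set when the instances were launched in the provisioning lab; a quick sanity check that both variables look reasonable:
```
echo "$ETCD_NAME $INTERNAL_IP"   # e.g. controller-0 10.240.0.10
```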
</details>
<p></p>
Create the `etcd.service` systemd unit file:
```

View File

@ -6,10 +6,41 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `gcloud` command. Example:
<details open>
<summary>GCP</summary>
```
gcloud compute ssh controller-0
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
get_ip() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
}
```
```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)"
```
</details>
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
@ -57,11 +88,26 @@ Install the Kubernetes binaries:
The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
<details open>
<summary>GCP</summary>
```
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
```
</details>
<details>
<summary>AWS</summary>
```
INTERNAL_IP="$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)"
```
</details>
<p></p>
Create the `kube-apiserver.service` systemd unit file:
```
@ -119,6 +165,9 @@ sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
Create the `kube-controller-manager.service` systemd unit file:
<details open>
<summary>GCP</summary>
```
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
@ -147,6 +196,41 @@ WantedBy=multi-user.target
EOF
```
</details>
<details>
<summary>AWS</summary>
```
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=10.200.0.0/16 \\
--cluster-name=kubernetes-the-hard-way \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
--cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.pem \\
--service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
--service-cluster-ip-range=10.32.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
```
</details>
### Configure the Kubernetes Scheduler
Move the `kube-scheduler` kubeconfig into place:
@ -202,6 +286,9 @@ EOF
### Enable HTTP Health Checks
<details open>
<summary>GCP</summary>
A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`.
> The `/healthz` API server endpoint does not require authentication by default.
@ -243,6 +330,8 @@ sudo systemctl restart nginx
sudo systemctl enable nginx
```
</details>
### Verification
```
@ -260,10 +349,13 @@ etcd-1 Healthy {"health": "true"}
Test the nginx HTTP health check proxy:
<details open>
<summary>GCP</summary>
```
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
```
> output
```
HTTP/1.1 200 OK
Server: nginx/1.14.0 (Ubuntu)
@ -275,6 +367,30 @@ Connection: keep-alive
ok
```
</details>
<details>
<summary>AWS</summary>
```
curl -i \
--cacert /var/lib/kubernetes/ca.pem \
-H "Host: kubernetes.default.svc.cluster.local" \
https://127.0.0.1:6443/healthz
```
> output
```
HTTP/2 200
content-type: text/plain; charset=utf-8
content-length: 2
date: Tue, 31 Jul 2018 15:47:02 GMT
ok
```
</details>
<p></p>
> Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
## RBAC for Kubelet Authorization
@ -283,10 +399,25 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
> This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
<details open>
<summary>GCP</summary>
```
gcloud compute ssh controller-0
```
</details>
<details>
<summary>AWS</summary>
```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)"
```
</details>
<p></p>
Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
```
@ -346,6 +477,9 @@ In this section you will provision an external load balancer to front the Kubern
Create the external load balancer network resources:
<details open>
<summary>GCP</summary>
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
@ -376,16 +510,62 @@ Create the external load balancer network resources:
}
```
</details>
<details>
<summary>AWS</summary>
```
get_instance_id() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].InstanceId' \
--output text
}
aws elb register-instances-with-load-balancer \
--load-balancer-name kubernetes-the-hard-way \
--instances \
"$(get_instance_id controller-0)" \
"$(get_instance_id controller-1)" \
"$(get_instance_id controller-2)" \
--profile kubernetes-the-hard-way
```
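Once the controllers are registered, their health as seen by the load balancer can be checked (optional); instances may report `OutOfService` until the API servers answer on port 6443:
```
aws elb describe-instance-health \
  --load-balancer-name kubernetes-the-hard-way \
  --profile kubernetes-the-hard-way \
  --query 'InstanceStates[].[InstanceId,State]' \
  --output table
```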
</details>
### Verification
Retrieve the `kubernetes-the-hard-way` static IP address:
<details open>
<summary>GCP</summary>
```
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
```
</details>
<details>
<summary>AWS</summary>
```
KUBERNETES_PUBLIC_ADDRESS="$(aws elb describe-load-balancers \
--load-balancer-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'LoadBalancerDescriptions[0].DNSName' \
--output text)"
```
</details>
<p></p>
Make an HTTP request for the Kubernetes version info:
```

View File

@ -6,10 +6,41 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `gcloud` command. Example:
<details open>
<summary>GCP</summary>
```
gcloud compute ssh worker-0
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
get_ip() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
}
```
```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip worker-0)"
```
</details>
### Running commands in parallel with tmux
[tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
@ -70,11 +101,26 @@ Install the worker binaries:
Retrieve the Pod CIDR range for the current compute instance:
<details open>
<summary>GCP</summary>
```
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
```
</details>
<details>
<summary>AWS</summary>
```
POD_CIDR="$(curl -s http://169.254.169.254/latest/user-data/|tr '|' '\n'|grep '^pod-cidr='|cut -d= -f2)"
```
</details>
<p></p>
Create the `bridge` network configuration file:
```
@ -162,6 +208,9 @@ EOF
### Configure the Kubelet
<details open>
<summary>GCP</summary>
```
{
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
@ -170,8 +219,26 @@ EOF
}
```
</details>
<details>
<summary>AWS</summary>
```
WORKER_NAME="$(curl -s http://169.254.169.254/latest/user-data/|tr '|' '\n'|grep '^name='|cut -d= -f2)"
sudo mv "$WORKER_NAME-key.pem" "$WORKER_NAME.pem" /var/lib/kubelet/
sudo mv "$WORKER_NAME.kubeconfig" /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
```
</details>
<p></p>
Create the `kubelet-config.yaml` configuration file:
<details open>
<summary>GCP</summary>
```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
@ -195,6 +262,37 @@ tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
```
</details>
<details>
<summary>AWS</summary>
```
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "$POD_CIDR"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/$WORKER_NAME.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/$WORKER_NAME-key.pem"
EOF
```
</details>
<p></p>
Create the `kubelet.service` systemd unit file:
```
@ -279,11 +377,27 @@ EOF
List the registered Kubernetes nodes:
<details open>
<summary>GCP</summary>
```
gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
```
</details>
<details>
<summary>AWS</summary>
```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" \
"kubectl get nodes --kubeconfig admin.kubeconfig"
```
</details>
<p></p>
> output
```

View File

@ -10,6 +10,9 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
Generate a kubeconfig file suitable for authenticating as the `admin` user:
<details open>
<summary>GCP</summary>
```
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
@ -33,6 +36,36 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
}
```
</details>
<details>
<summary>AWS</summary>
```
KUBERNETES_PUBLIC_ADDRESS="$(aws elb describe-load-balancers \
--load-balancer-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'LoadBalancerDescriptions[0].DNSName' \
--output text)"
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server="https://$KUBERNETES_PUBLIC_ADDRESS:6443"
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
```
</details>
## Verification
Check the health of the remote Kubernetes cluster:

View File

@ -12,6 +12,9 @@ In this section you will gather the information required to create routes in the
Print the internal IP address and Pod CIDR range for each worker instance:
<details open>
<summary>GCP</summary>
```
for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
@ -19,6 +22,50 @@ for instance in worker-0 worker-1 worker-2; do
done
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
```
```
for i in 0 1 2; do
instance_id="$(aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="worker-$i" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].InstanceId' \
--output text)"
instance_ip="$(aws ec2 describe-instances \
--instance-ids "$instance_id" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PrivateIpAddress' \
--output text)"
instance_ud="$(aws ec2 describe-instance-attribute \
--instance-id "$instance_id" \
--attribute userData \
--profile kubernetes-the-hard-way \
--query UserData.Value \
--output text|base64 --decode)"
pod_cidr="$(echo "$instance_ud"|tr '|' '\n'|grep '^pod-cidr='|cut -d= -f2)"
echo "$instance_ip $pod_cidr"
done
```
</details>
<p></p>
> output
```
@ -31,6 +78,9 @@ done
Create network routes for each worker instance:
<details open>
<summary>GCP</summary>
```
for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
@ -40,14 +90,79 @@ for i in 0 1 2; do
done
```
</details>
<details>
<summary>AWS</summary>
```
ROUTE_TABLE_ID="$(aws ec2 describe-route-tables \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'RouteTables[0].RouteTableId' \
--output text)"
for i in 0 1 2; do
instance_id="$(aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="worker-$i" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].InstanceId' \
--output text)"
instance_ud="$(aws ec2 describe-instance-attribute \
--instance-id "$instance_id" \
--attribute userData \
--profile kubernetes-the-hard-way \
--query UserData.Value \
--output text|base64 --decode)"
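# Extract the pod CIDR from the worker's user data again (same pipe-separated
# key=value format as assumed in the previous step).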
pod_cidr="$(echo "$instance_ud"|tr '|' '\n'|grep '^pod-cidr='|cut -d= -f2)"
aws ec2 create-route \
--route-table-id "$ROUTE_TABLE_ID" \
--destination-cidr-block "$pod_cidr" \
--instance-id "$instance_id" \
--profile kubernetes-the-hard-way
done
```
</details>
<p></p>
List the routes in the `kubernetes-the-hard-way` VPC network:
<details open>
<summary>GCP</summary>
```
gcloud compute routes list --filter "network: kubernetes-the-hard-way"
```
</details>
<details>
<summary>AWS</summary>
```
aws ec2 describe-route-tables \
--route-table-ids "$ROUTE_TABLE_ID" \
--profile kubernetes-the-hard-way \
--query 'RouteTables[0].Routes[]|sort_by(@, &DestinationCidrBlock)[].[InstanceId,DestinationCidrBlock,GatewayId]' \
--output table
```
</details>
<p></p>
> output
<details open>
<summary>GCP</summary>
```
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-236a40a8bc992b5b kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
@@ -57,4 +172,24 @@ kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
```
</details>
<details>
<summary>AWS</summary>
```
----------------------------------------------------------
| DescribeRouteTables |
+---------------------+-----------------+----------------+
| None | 0.0.0.0/0 | igw-116a3177 |
| i-0d173dd08280c9f52| 10.200.0.0/24 | None |
| i-0a4ae7e79b0bc3cc9| 10.200.1.0/24 | None |
| i-0a424b69034b9068f| 10.200.2.0/24 | None |
| None | 10.240.0.0/24 | local |
+---------------------+-----------------+----------------+
```
</details>
<p></p>
Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
@@ -15,6 +15,9 @@ kubectl create secret generic kubernetes-the-hard-way \
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
<details open>
<summary>GCP</summary>
```
gcloud compute ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \
@@ -25,6 +28,41 @@ gcloud compute ssh controller-0 \
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
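# Helper used below: returns the public IP address of the instance whose
# Name tag matches the first argument.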
get_ip() {
aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$1" \
--profile kubernetes-the-hard-way \
--query 'Reservations[0].Instances[0].PublicIpAddress' \
--output text
}
```
```
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" \
sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way|hexdump -C
```
</details>
<p></p>
> output
```
@@ -176,19 +214,62 @@ NODE_PORT=$(kubectl get svc nginx \
Create a firewall rule that allows remote access to the `nginx` node port:
<details open>
<summary>GCP</summary>
```
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
```
</details>
<details>
<summary>AWS</summary>
```
SECURITY_GROUP_ID="$(aws ec2 describe-security-groups \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'SecurityGroups[0].GroupId' \
--output text)"
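# Open the nginx node port to the world (0.0.0.0/0) on the cluster security
# group, mirroring the GCP firewall rule above.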
aws ec2 authorize-security-group-ingress \
--group-id "$SECURITY_GROUP_ID" \
--protocol tcp \
--port "$NODE_PORT" \
--cidr 0.0.0.0/0 \
--profile kubernetes-the-hard-way
```
</details>
<p></p>
Retrieve the external IP address of a worker instance:
<details open>
<summary>GCP</summary>
```
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
```
</details>
<details>
<summary>AWS</summary>
```
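# get_ip() is the helper defined earlier in this lab (see the etcd encryption check).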
EXTERNAL_IP="$(get_ip worker-0)"
```
</details>
<p></p>
Make an HTTP request using the external IP address and the `nginx` node port:
```
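# The request itself is outside this diff's context; a minimal sketch,
# assuming EXTERNAL_IP and NODE_PORT are set as above:
curl -I "http://${EXTERNAL_IP}:${NODE_PORT}"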
@@ -249,16 +330,54 @@ untrusted 1/1 Running 0 10s 10.200.0.3
Get the node name where the `untrusted` pod is running:
<details open>
<summary>GCP</summary>
```
INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')
```
</details>
<details>
<summary>AWS</summary>
```
INSTANCE_PRIVATE_IP="$(kubectl get pod untrusted --output=jsonpath='{.status.hostIP}')"
```
</details>
<p></p>
SSH into the worker node:
<details open>
<summary>GCP</summary>
```
gcloud compute ssh ${INSTANCE_NAME}
```
</details>
<details>
<summary>AWS</summary>
```
INSTANCE_PUBLIC_IP="$(aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=private-ip-address,Values="$INSTANCE_PRIVATE_IP" \
--profile kubernetes-the-hard-way \
--query 'Reservations[].Instances[].PublicIpAddress' \
--output text)"
ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$INSTANCE_PUBLIC_IP"
```
</details>
<p></p>
List the containers running under gVisor:
```
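# The command itself is outside this diff's context. As a sketch only, gVisor
# (runsc) sandboxes can be listed with the runsc `list` subcommand; the --root
# path below assumes the containerd/runsc configuration used in this tutorial.
sudo runsc --root /run/containerd/runsc/k8s.io list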
@@ -2,6 +2,9 @@
In this lab you will delete the compute resources created during this tutorial.
<details open>
<summary>GCP</summary>
## Compute Instances
Delete the controller and worker compute instances:
@@ -53,3 +56,117 @@ Delete the `kubernetes-the-hard-way` network VPC:
gcloud -q compute networks delete kubernetes-the-hard-way
}
```
</details>
<details>
<summary>AWS</summary>
```
VPC_ID="$(aws ec2 describe-vpcs \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Vpcs[0].VpcId' \
--output text)"
```
```
for host in controller-0 controller-1 controller-2 worker-0 worker-1 worker-2; do
INSTANCE_ID="$(aws ec2 describe-instances \
--filters \
Name=vpc-id,Values="$VPC_ID" \
Name=tag:Name,Values="$host" \
--profile kubernetes-the-hard-way \
--query 'Reservations[].Instances[].InstanceId' \
--output text)"
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID" --profile kubernetes-the-hard-way
done
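# Delete the IAM artifacts created for this tutorial. The role has to be
# removed from the instance profile, and its inline policy deleted, before the
# profile and the role themselves can be deleted.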
aws iam remove-role-from-instance-profile \
--instance-profile-name kubernetes-the-hard-way \
--role-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way
aws iam delete-instance-profile \
--instance-profile-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way
aws iam delete-role-policy \
--role-name kubernetes-the-hard-way \
--policy-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way
aws iam delete-role \
--role-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way
aws ec2 delete-key-pair \
--key-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way
# Proceed once all EC2 instances have finished terminating.
aws elb delete-load-balancer \
--load-balancer-name kubernetes-the-hard-way \
--profile kubernetes-the-hard-way
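# With the instances and load balancer gone, tear down the network:
# internet gateway, security group, subnet, route table, VPC and DHCP option set.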
INTERNET_GATEWAY_ID="$(aws ec2 describe-internet-gateways \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'InternetGateways[0].InternetGatewayId' \
--output text)"
aws ec2 detach-internet-gateway \
--internet-gateway-id "$INTERNET_GATEWAY_ID" \
--vpc-id "$VPC_ID" \
--profile kubernetes-the-hard-way
aws ec2 delete-internet-gateway \
--internet-gateway-id "$INTERNET_GATEWAY_ID" \
--profile kubernetes-the-hard-way
SECURITY_GROUP_ID="$(aws ec2 describe-security-groups \
--filters Name=group-name,Values=kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'SecurityGroups[0].GroupId' \
--output text)"
aws ec2 delete-security-group \
--group-id "$SECURITY_GROUP_ID" \
--profile kubernetes-the-hard-way
SUBNET_ID="$(aws ec2 describe-subnets \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'Subnets[0].SubnetId' \
--output text)"
aws ec2 delete-subnet \
--subnet-id "$SUBNET_ID" \
--profile kubernetes-the-hard-way
ROUTE_TABLE_ID="$(aws ec2 describe-route-tables \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'RouteTables[0].RouteTableId' \
--output text)"
aws ec2 delete-route-table \
--route-table-id "$ROUTE_TABLE_ID" \
--profile kubernetes-the-hard-way
aws ec2 delete-vpc \
--vpc-id "$VPC_ID" \
--profile kubernetes-the-hard-way
DHCP_OPTION_SET_ID="$(aws ec2 describe-dhcp-options \
--filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
--profile kubernetes-the-hard-way \
--query 'DhcpOptions[0].DhcpOptionsId' \
--output text)"
aws ec2 delete-dhcp-options \
--dhcp-options-id "$DHCP_OPTION_SET_ID" \
--profile kubernetes-the-hard-way
```
</details>