remove AWS support

pull/137/head
Kelsey Hightower 2017-03-25 14:20:31 -07:00
parent 818501707e
commit 4989117cf2
13 changed files with 10 additions and 836 deletions


@@ -2,10 +2,7 @@
 This tutorial will walk you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Container Engine](https://cloud.google.com/container-engine), or the [Getting Started Guides](http://kubernetes.io/docs/getting-started-guides/).
-This tutorial is optimized for learning, which means taking the long route to help people understand each task required to bootstrap a Kubernetes cluster. This tutorial can be completed on the following platforms:
-* [Google Compute Engine](https://cloud.google.com/compute)
-* [Amazon EC2](https://aws.amazon.com/ec2)
+This tutorial is optimized for learning, which means taking the long route to help people understand each task required to bootstrap a Kubernetes cluster. This tutorial can be completed on [Google Compute Engine](https://cloud.google.com/compute).
 > The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that prevent you from learning!


@@ -1,472 +0,0 @@
# Cloud Infrastructure Provisioning - Amazon Web Services
This lab will walk you through provisioning the compute instances required for running an H/A Kubernetes cluster. A total of 6 virtual machines will be created.
The guide uses the `us-west-2` region, but you can override that at the start.
After completing this guide you should have the following compute instances:
![EC2 Console](ec2-instances.png)
> All machines will be provisioned with fixed private IP addresses to simplify the bootstrap process.
To make our Kubernetes control plane remotely accessible, a public IP address will be provisioned and assigned to a Load Balancer that will sit in front of the 3 Kubernetes controllers.
## Networking
### VPC
```
AWS_REGION=us-west-2
```
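Since the region is meant to be overridable, one option (a sketch, not part of the original lab) is to fall back to `us-west-2` only when `AWS_REGION` is not already set in the environment:

```shell
# Keep an AWS_REGION exported in the environment if present; otherwise
# default to the region used throughout this guide.
AWS_REGION=${AWS_REGION:-us-west-2}
echo "${AWS_REGION}"
```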
```
VPC_ID=$(aws ec2 create-vpc \
--cidr-block 10.240.0.0/16 | \
jq -r '.Vpc.VpcId')
```
```
aws ec2 create-tags \
--resources ${VPC_ID} \
--tags Key=Name,Value=kubernetes
```
```
aws ec2 modify-vpc-attribute \
--vpc-id ${VPC_ID} \
--enable-dns-support '{"Value": true}'
```
```
aws ec2 modify-vpc-attribute \
--vpc-id ${VPC_ID} \
--enable-dns-hostnames '{"Value": true}'
```
### DHCP Option Sets
```
DHCP_OPTION_SET_ID=$(aws ec2 create-dhcp-options \
--dhcp-configuration "Key=domain-name,Values=$AWS_REGION.compute.internal" \
"Key=domain-name-servers,Values=AmazonProvidedDNS" | \
jq -r '.DhcpOptions.DhcpOptionsId')
```
```
aws ec2 create-tags \
--resources ${DHCP_OPTION_SET_ID} \
--tags Key=Name,Value=kubernetes
```
```
aws ec2 associate-dhcp-options \
--dhcp-options-id ${DHCP_OPTION_SET_ID} \
--vpc-id ${VPC_ID}
```
### Subnets
Create a subnet for the Kubernetes cluster:
```
SUBNET_ID=$(aws ec2 create-subnet \
--vpc-id ${VPC_ID} \
--cidr-block 10.240.0.0/24 | \
jq -r '.Subnet.SubnetId')
```
```
aws ec2 create-tags \
--resources ${SUBNET_ID} \
--tags Key=Name,Value=kubernetes
```
### Internet Gateways
```
INTERNET_GATEWAY_ID=$(aws ec2 create-internet-gateway | \
jq -r '.InternetGateway.InternetGatewayId')
```
```
aws ec2 create-tags \
--resources ${INTERNET_GATEWAY_ID} \
--tags Key=Name,Value=kubernetes
```
```
aws ec2 attach-internet-gateway \
--internet-gateway-id ${INTERNET_GATEWAY_ID} \
--vpc-id ${VPC_ID}
```
### Route Tables
```
ROUTE_TABLE_ID=$(aws ec2 create-route-table \
--vpc-id ${VPC_ID} | \
jq -r '.RouteTable.RouteTableId')
```
```
aws ec2 create-tags \
--resources ${ROUTE_TABLE_ID} \
--tags Key=Name,Value=kubernetes
```
```
aws ec2 associate-route-table \
--route-table-id ${ROUTE_TABLE_ID} \
--subnet-id ${SUBNET_ID}
```
```
aws ec2 create-route \
--route-table-id ${ROUTE_TABLE_ID} \
--destination-cidr-block 0.0.0.0/0 \
--gateway-id ${INTERNET_GATEWAY_ID}
```
### Firewall Rules
```
SECURITY_GROUP_ID=$(aws ec2 create-security-group \
--group-name kubernetes \
--description "Kubernetes security group" \
--vpc-id ${VPC_ID} | \
jq -r '.GroupId')
```
```
aws ec2 create-tags \
--resources ${SECURITY_GROUP_ID} \
--tags Key=Name,Value=kubernetes
```
Allow traffic from the `10.200.0.0/16` range, which is used for the Pod network later in the tutorial:
```
aws ec2 authorize-security-group-ingress \
--group-id ${SECURITY_GROUP_ID} \
--protocol all \
--cidr 10.200.0.0/16
```
```
aws ec2 authorize-security-group-ingress \
--group-id ${SECURITY_GROUP_ID} \
--protocol all \
--port 0-65535 \
--cidr 10.240.0.0/16
```
```
aws ec2 authorize-security-group-ingress \
--group-id ${SECURITY_GROUP_ID} \
--protocol tcp \
--port 22 \
--cidr 0.0.0.0/0
```
```
aws ec2 authorize-security-group-ingress \
--group-id ${SECURITY_GROUP_ID} \
--protocol tcp \
--port 6443 \
--cidr 0.0.0.0/0
```
```
aws ec2 authorize-security-group-ingress \
--group-id ${SECURITY_GROUP_ID} \
--protocol all \
--source-group ${SECURITY_GROUP_ID}
```
### Kubernetes Public Address
An ELB will be used to load balance traffic across the Kubernetes control plane.
```
aws elb create-load-balancer \
--load-balancer-name kubernetes \
--listeners "Protocol=TCP,LoadBalancerPort=6443,InstanceProtocol=TCP,InstancePort=6443" \
--subnets ${SUBNET_ID} \
--security-groups ${SECURITY_GROUP_ID}
```
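The load balancer's DNS name becomes the cluster's public address in later labs. The extraction looks like this (a sketch; the canned JSON below stands in for the real response so the snippet is runnable anywhere):

```shell
# Canned describe-load-balancers response; in practice pipe the output of
# `aws elb describe-load-balancers --load-balancer-name kubernetes`
# into the same jq filter to capture the public address.
RESPONSE='{"LoadBalancerDescriptions":[{"DNSName":"kubernetes-1234.us-west-2.elb.amazonaws.com"}]}'
KUBERNETES_PUBLIC_ADDRESS=$(echo "${RESPONSE}" | \
  jq -r '.LoadBalancerDescriptions[].DNSName')
echo "${KUBERNETES_PUBLIC_ADDRESS}"
# → kubernetes-1234.us-west-2.elb.amazonaws.com
```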
## Provision Virtual Machines
All the VMs in this lab will be provisioned using Ubuntu 16.04, mainly because it ships a recent Linux kernel with good support for Docker.
All virtual machines in this section will be created with the `--no-source-dest-check` flag so that traffic between foreign subnets can flow. This will enable Pods to communicate with nodes and other Pods via the Kubernetes service IP.
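You can confirm the flag took effect later with `aws ec2 describe-instance-attribute --attribute sourceDestCheck`; its response parses like this (a sketch against a canned response):

```shell
# Canned describe-instance-attribute response; the real call is
# `aws ec2 describe-instance-attribute --instance-id <id> --attribute sourceDestCheck`.
RESPONSE='{"InstanceId":"i-0abc1234","SourceDestCheck":{"Value":false}}'
# "false" confirms the source/dest check has been disabled.
echo "${RESPONSE}" | jq -r '.SourceDestCheck.Value'
# → false
```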
### Create Instance IAM Policies
```
cat > kubernetes-iam-role.json <<'EOF'
{
"Version": "2012-10-17",
"Statement": [
{"Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com"}, "Action": "sts:AssumeRole"}
]
}
EOF
```
```
aws iam create-role \
--role-name kubernetes \
--assume-role-policy-document file://kubernetes-iam-role.json
```
```
cat > kubernetes-iam-policy.json <<'EOF'
{
"Version": "2012-10-17",
"Statement": [
{"Effect": "Allow", "Action": ["ec2:*"], "Resource": ["*"]},
{"Effect": "Allow", "Action": ["elasticloadbalancing:*"], "Resource": ["*"]},
{"Effect": "Allow", "Action": ["route53:*"], "Resource": ["*"]},
{"Effect": "Allow", "Action": ["ecr:*"], "Resource": "*"}
]
}
EOF
```
```
aws iam put-role-policy \
--role-name kubernetes \
--policy-name kubernetes \
--policy-document file://kubernetes-iam-policy.json
```
```
aws iam create-instance-profile \
--instance-profile-name kubernetes
```
```
aws iam add-role-to-instance-profile \
--instance-profile-name kubernetes \
--role-name kubernetes
```
### Choosing an Image
Pick the latest Ubuntu Xenial server image:
```
IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 \
--region $AWS_REGION \
--filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*' \
| jq -r '.Images|sort_by(.Name)[-1]|.ImageId')
```
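The `sort_by(.Name)[-1]` trick above selects the newest image because the AMI names embed their build date. A canned example (hypothetical image IDs):

```shell
# Two fake AMIs; sorting by Name puts the newest build last, so [-1]
# selects it. Real responses come from `aws ec2 describe-images`.
IMAGES='{"Images":[
  {"Name":"ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170301","ImageId":"ami-aaaa1111"},
  {"Name":"ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170315","ImageId":"ami-bbbb2222"}]}'
echo "${IMAGES}" | jq -r '.Images|sort_by(.Name)[-1]|.ImageId'
# → ami-bbbb2222
```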
### Generate an SSH Key Pair
```
aws ec2 create-key-pair --key-name kubernetes | \
jq -r '.KeyMaterial' > ~/.ssh/kubernetes_the_hard_way
```
```
chmod 600 ~/.ssh/kubernetes_the_hard_way
```
```
ssh-add ~/.ssh/kubernetes_the_hard_way
```
#### SSH Access
Once the virtual machines are created you'll be able to log in to each machine using SSH like this:
```
WORKER_0_PUBLIC_IP_ADDRESS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=worker0" | \
jq -j '.Reservations[].Instances[].PublicIpAddress')
```
> The instance public IP address can also be obtained from the EC2 console. Each node will be tagged with a unique name.
```
ssh ubuntu@${WORKER_0_PUBLIC_IP_ADDRESS}
```
### Virtual Machines
#### Kubernetes Controllers
```
CONTROLLER_0_INSTANCE_ID=$(aws ec2 run-instances \
--associate-public-ip-address \
--iam-instance-profile 'Name=kubernetes' \
--image-id ${IMAGE_ID} \
--count 1 \
--key-name kubernetes \
--security-group-ids ${SECURITY_GROUP_ID} \
--instance-type t2.small \
--private-ip-address 10.240.0.10 \
--subnet-id ${SUBNET_ID} | \
jq -r '.Instances[].InstanceId')
```
```
aws ec2 modify-instance-attribute \
--instance-id ${CONTROLLER_0_INSTANCE_ID} \
--no-source-dest-check
```
```
aws ec2 create-tags \
--resources ${CONTROLLER_0_INSTANCE_ID} \
--tags Key=Name,Value=controller0
```
```
CONTROLLER_1_INSTANCE_ID=$(aws ec2 run-instances \
--associate-public-ip-address \
--iam-instance-profile 'Name=kubernetes' \
--image-id ${IMAGE_ID} \
--count 1 \
--key-name kubernetes \
--security-group-ids ${SECURITY_GROUP_ID} \
--instance-type t2.small \
--private-ip-address 10.240.0.11 \
--subnet-id ${SUBNET_ID} | \
jq -r '.Instances[].InstanceId')
```
```
aws ec2 modify-instance-attribute \
--instance-id ${CONTROLLER_1_INSTANCE_ID} \
--no-source-dest-check
```
```
aws ec2 create-tags \
--resources ${CONTROLLER_1_INSTANCE_ID} \
--tags Key=Name,Value=controller1
```
```
CONTROLLER_2_INSTANCE_ID=$(aws ec2 run-instances \
--associate-public-ip-address \
--iam-instance-profile 'Name=kubernetes' \
--image-id ${IMAGE_ID} \
--count 1 \
--key-name kubernetes \
--security-group-ids ${SECURITY_GROUP_ID} \
--instance-type t2.small \
--private-ip-address 10.240.0.12 \
--subnet-id ${SUBNET_ID} | \
jq -r '.Instances[].InstanceId')
```
```
aws ec2 modify-instance-attribute \
--instance-id ${CONTROLLER_2_INSTANCE_ID} \
--no-source-dest-check
```
```
aws ec2 create-tags \
--resources ${CONTROLLER_2_INSTANCE_ID} \
--tags Key=Name,Value=controller2
```
#### Kubernetes Workers
```
WORKER_0_INSTANCE_ID=$(aws ec2 run-instances \
--associate-public-ip-address \
--iam-instance-profile 'Name=kubernetes' \
--image-id ${IMAGE_ID} \
--count 1 \
--key-name kubernetes \
--security-group-ids ${SECURITY_GROUP_ID} \
--instance-type t2.small \
--private-ip-address 10.240.0.20 \
--subnet-id ${SUBNET_ID} | \
jq -r '.Instances[].InstanceId')
```
```
aws ec2 modify-instance-attribute \
--instance-id ${WORKER_0_INSTANCE_ID} \
--no-source-dest-check
```
```
aws ec2 create-tags \
--resources ${WORKER_0_INSTANCE_ID} \
--tags Key=Name,Value=worker0
```
```
WORKER_1_INSTANCE_ID=$(aws ec2 run-instances \
--associate-public-ip-address \
--iam-instance-profile 'Name=kubernetes' \
--image-id ${IMAGE_ID} \
--count 1 \
--key-name kubernetes \
--security-group-ids ${SECURITY_GROUP_ID} \
--instance-type t2.small \
--private-ip-address 10.240.0.21 \
--subnet-id ${SUBNET_ID} | \
jq -r '.Instances[].InstanceId')
```
```
aws ec2 modify-instance-attribute \
--instance-id ${WORKER_1_INSTANCE_ID} \
--no-source-dest-check
```
```
aws ec2 create-tags \
--resources ${WORKER_1_INSTANCE_ID} \
--tags Key=Name,Value=worker1
```
```
WORKER_2_INSTANCE_ID=$(aws ec2 run-instances \
--associate-public-ip-address \
--iam-instance-profile 'Name=kubernetes' \
--image-id ${IMAGE_ID} \
--count 1 \
--key-name kubernetes \
--security-group-ids ${SECURITY_GROUP_ID} \
--instance-type t2.small \
--private-ip-address 10.240.0.22 \
--subnet-id ${SUBNET_ID} | \
jq -r '.Instances[].InstanceId')
```
```
aws ec2 modify-instance-attribute \
--instance-id ${WORKER_2_INSTANCE_ID} \
--no-source-dest-check
```
```
aws ec2 create-tags \
--resources ${WORKER_2_INSTANCE_ID} \
--tags Key=Name,Value=worker2
```
## Verify
```
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" "Name=vpc-id,Values=${VPC_ID}" | \
jq -j '.Reservations[].Instances[] | .InstanceId, " ", .Placement.AvailabilityZone, " ", .PrivateIpAddress, " ", .PublicIpAddress, "\n"'
```
```
i-ae714f73 us-west-2c 10.240.0.11 XX.XX.XX.XXX
i-f4714f29 us-west-2c 10.240.0.21 XX.XX.XXX.XXX
i-f6714f2b us-west-2c 10.240.0.12 XX.XX.XX.XX
i-e26e503f us-west-2c 10.240.0.22 XX.XX.XXX.XXX
i-e8714f35 us-west-2c 10.240.0.10 XX.XX.XXX.XXX
i-78704ea5 us-west-2c 10.240.0.20 XX.XX.XXX.XXX
```
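The same response can also be reduced to a quick instance count; with all six VMs up, the filter below should report 6 (a sketch on a trimmed, canned response):

```shell
# Canned describe-instances response with two reservations; the real
# input is the output of the describe-instances command shown above.
RESPONSE='{"Reservations":[{"Instances":[{"InstanceId":"i-1"},{"InstanceId":"i-2"}]},{"Instances":[{"InstanceId":"i-3"}]}]}'
echo "${RESPONSE}" | jq '[.Reservations[].Instances[]] | length'
# → 3
```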


@@ -1,8 +1,7 @@
 # Cloud Infrastructure Provisioning
-Kubernetes can be installed just about anywhere physical or virtual machines can be run. In this lab we are going to focus on [Google Cloud Platform](https://cloud.google.com/) and [Amazon Web Services](https://aws.amazon.com).
+Kubernetes can be installed just about anywhere physical or virtual machines can be run. In this lab we are going to focus on [Google Cloud Platform](https://cloud.google.com/).
 This lab will walk you through provisioning the compute instances required for running an H/A Kubernetes cluster.
 * [Cloud Infrastructure Provisioning - Google Cloud Platform](01-infrastructure-gcp.md)
-* [Cloud Infrastructure Provisioning - Amazon Web Services](01-infrastructure-aws.md)


@@ -213,22 +213,12 @@ Set the Kubernetes Public IP Address
 The Kubernetes public IP address will be included in the list of subject alternative names for the Kubernetes server certificate. This will ensure the TLS certificate is valid for remote client access.
-#### GCE
 ```
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
 --region us-central1 \
 --format 'value(address)')
 ```
-#### AWS
-```
-KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \
---load-balancer-name kubernetes | \
-jq -r '.LoadBalancerDescriptions[].DNSName')
-```
 ---
 Create the kubernetes server certificate signing request:
@@ -296,8 +286,6 @@ KUBERNETES_WORKERS=(worker0 worker1 worker2)
 KUBERNETES_CONTROLLERS=(controller0 controller1 controller2)
 ```
-### GCE
 The following command will:
 * Copy the TLS certificates and keys to each Kubernetes host using the `gcloud compute copy-files` command.
@@ -312,30 +300,4 @@ done
 for host in ${KUBERNETES_CONTROLLERS[*]}; do
 gcloud compute copy-files ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem ${host}:~/
 done
 ```
### AWS
The following command will:
* Extract the public IP address for each Kubernetes host
* Copy the TLS certificates and keys to each Kubernetes host using `scp`
```
for host in ${KUBERNETES_WORKERS[*]}; do
PUBLIC_IP_ADDRESS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=${host}" | \
jq -r '.Reservations[].Instances[].PublicIpAddress')
scp -o "StrictHostKeyChecking no" ca.pem kube-proxy.pem kube-proxy-key.pem \
ubuntu@${PUBLIC_IP_ADDRESS}:~/
done
```
```
for host in ${KUBERNETES_CONTROLLERS[*]}; do
PUBLIC_IP_ADDRESS=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=${host}" | \
jq -r '.Reservations[].Instances[].PublicIpAddress')
scp -o "StrictHostKeyChecking no" ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
ubuntu@${PUBLIC_IP_ADDRESS}:~/
done
```


@@ -54,26 +54,12 @@ Distribute the bootstrap token file to each controller node:
 KUBERNETES_CONTROLLERS=(controller0 controller1 controller2)
 ```
-#### GCE
 ```
 for host in ${KUBERNETES_CONTROLLERS[*]}; do
 gcloud compute copy-files token.csv ${host}:~/
 done
 ```
-#### AWS
-```
-for host in ${KUBERNETES_CONTROLLERS[*]}; do
-PUBLIC_IP_ADDRESS=$(aws ec2 describe-instances \
---filters "Name=tag:Name,Values=${host}" | \
-jq -r '.Reservations[].Instances[].PublicIpAddress')
-scp -o "StrictHostKeyChecking no" token.csv \
-ubuntu@${PUBLIC_IP_ADDRESS}:~/
-done
-```
 ## Client Authentication Configs
 This section will walk you through creating kubeconfig files that will be used to bootstrap kubelets, which will then generate their own kubeconfigs based on dynamically generated certificates, and a kubeconfig for authenticating kube-proxy clients.
@@ -82,24 +68,12 @@ Each kubeconfig requires a Kubernetes master to connect to. To support H/A the I
 ### Set the Kubernetes Public Address
-#### GCE
 ```
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
 --region us-central1 \
 --format 'value(address)')
 ```
-#### AWS
-```
-KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \
---load-balancer-name kubernetes-the-hard-way | \
-jq -r '.LoadBalancerDescriptions[].DNSName')
-```
----
 ## Create client kubeconfig files
 ### Create the bootstrap kubeconfig file
@@ -165,22 +139,8 @@ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
 KUBERNETES_WORKERS=(worker0 worker1 worker2)
 ```
-##### GCE
 ```
 for host in ${KUBERNETES_WORKERS[*]}; do
 gcloud compute copy-files bootstrap.kubeconfig kube-proxy.kubeconfig ${host}:~/
 done
 ```
-##### AWS
-```
-for host in ${KUBERNETES_WORKERS[*]}; do
-PUBLIC_IP_ADDRESS=$(aws ec2 describe-instances \
---filters "Name=tag:Name,Values=${host}" | \
-jq -r '.Reservations[].Instances[].PublicIpAddress')
-scp -o "StrictHostKeyChecking no" bootstrap.kubeconfig kube-proxy.kubeconfig \
-ubuntu@${PUBLIC_IP_ADDRESS}:~/
-done
-```


@@ -63,21 +63,11 @@ sudo mkdir -p /var/lib/etcd
 The internal IP address will be used by etcd to serve client requests and communicate with other etcd peers.
-#### GCE
 ```
 INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
 http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
 ```
-#### AWS
-```
-INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
-```
----
 Each etcd member must have a unique name within an etcd cluster. Set the etcd name:
 ```


@@ -84,19 +84,11 @@ sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/
 Capture the internal IP address:
-#### GCE
 ```
 INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
 http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
 ```
-#### AWS
-```
-INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
-```
 ---
 Create the systemd unit file:
@@ -284,9 +276,7 @@ etcd-2 Healthy {"health": "true"}
 ## Setup Kubernetes API Server Frontend Load Balancer
 The virtual machines created in this tutorial will not have permission to complete this section. Run the following commands from the same place used to create the virtual machines for this tutorial.
-### GCE
 ```
 gcloud compute http-health-checks create kube-apiserver-health-check \
@@ -317,12 +307,4 @@ gcloud compute forwarding-rules create kubernetes-forwarding-rule \
 --ports 6443 \
 --target-pool kubernetes-target-pool \
 --region us-central1
 ```
-### AWS
-```
-aws elb register-instances-with-load-balancer \
---load-balancer-name kubernetes \
---instances ${CONTROLLER_0_INSTANCE_ID} ${CONTROLLER_1_INSTANCE_ID} ${CONTROLLER_2_INSTANCE_ID}
-```


@@ -24,23 +24,12 @@ sudo mv kubectl /usr/local/bin
 In this section you will configure the kubectl client to point to the [Kubernetes API Server Frontend Load Balancer](04-kubernetes-controller.md#setup-kubernetes-api-server-frontend-load-balancer).
-### GCE
 ```
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
 --region us-central1 \
 --format 'value(address)')
 ```
-### AWS
-```
-KUBERNETES_PUBLIC_ADDRESS=$(aws elb describe-load-balancers \
---load-balancer-name kubernetes | \
-jq -r '.LoadBalancerDescriptions[].DNSName')
-```
----
 Also be sure to locate the CA certificate [created earlier](02-certificate-authority.md). Since we are using self-signed TLS certs we need to trust the CA certificate so we can verify the remote API Servers.
 ### Build up the kubeconfig entry


@@ -40,8 +40,6 @@ Output:
 ## Create Routes
-### GCP
 ```
 gcloud compute routes create kubernetes-route-10-200-0-0-24 \
 --network kubernetes-the-hard-way \
@@ -61,51 +59,4 @@ gcloud compute routes create kubernetes-route-10-200-2-0-24 \
 --network kubernetes-the-hard-way \
 --next-hop-address 10.240.0.22 \
 --destination-range 10.200.2.0/24
 ```
### AWS
```
ROUTE_TABLE_ID=$(aws ec2 describe-route-tables \
--filters "Name=tag:Name,Values=kubernetes" | \
jq -r '.RouteTables[].RouteTableId')
```
```
WORKER_0_INSTANCE_ID=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=worker0" | \
jq -j '.Reservations[].Instances[].InstanceId')
```
```
aws ec2 create-route \
--route-table-id ${ROUTE_TABLE_ID} \
--destination-cidr-block 10.200.0.0/24 \
--instance-id ${WORKER_0_INSTANCE_ID}
```
```
WORKER_1_INSTANCE_ID=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=worker1" | \
jq -j '.Reservations[].Instances[].InstanceId')
```
```
aws ec2 create-route \
--route-table-id ${ROUTE_TABLE_ID} \
--destination-cidr-block 10.200.1.0/24 \
--instance-id ${WORKER_1_INSTANCE_ID}
```
```
WORKER_2_INSTANCE_ID=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=worker2" | \
jq -j '.Reservations[].Instances[].InstanceId')
```
```
aws ec2 create-route \
--route-table-id ${ROUTE_TABLE_ID} \
--destination-cidr-block 10.200.2.0/24 \
--instance-id ${WORKER_2_INSTANCE_ID}
```


@@ -40,8 +40,6 @@ NODE_PORT=$(kubectl get svc nginx --output=jsonpath='{range .spec.ports[0]}{.nod
 ### Create the Node Port Firewall Rule
-#### GCP
 ```
 gcloud compute firewall-rules create kubernetes-nginx-service \
 --allow=tcp:${NODE_PORT} \
@@ -55,32 +53,6 @@ NODE_PUBLIC_IP=$(gcloud compute instances describe worker0 \
 --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
 ```
-#### AWS
-```
-SECURITY_GROUP_ID=$(aws ec2 describe-security-groups \
---filters "Name=tag:Name,Values=kubernetes" | \
-jq -r '.SecurityGroups[].GroupId')
-```
-```
-aws ec2 authorize-security-group-ingress \
---group-id ${SECURITY_GROUP_ID} \
---protocol tcp \
---port ${NODE_PORT} \
---cidr 0.0.0.0/0
-```
-Grab the `EXTERNAL_IP` for one of the worker nodes:
-```
-NODE_PUBLIC_IP=$(aws ec2 describe-instances \
---filters "Name=tag:Name,Values=worker0" | \
-jq -j '.Reservations[].Instances[].PublicIpAddress')
-```
----
 Test the nginx service using cURL:
 ```


@@ -1,8 +1,6 @@
 # Cleaning Up
-## GCP
-### Virtual Machines
+## Virtual Machines
 ```
 gcloud -q compute instances delete \
@@ -10,7 +8,7 @@ gcloud -q compute instances delete \
 worker0 worker1 worker2
 ```
-### Networking
+## Networking
 ```
 gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule --region us-central1
@@ -50,155 +48,4 @@ gcloud -q compute networks subnets delete kubernetes
 ```
 gcloud -q compute networks delete kubernetes-the-hard-way
 ```
## AWS
### Virtual Machines
```
KUBERNETES_HOSTS=(controller0 controller1 controller2 worker0 worker1 worker2)
```
```
for host in ${KUBERNETES_HOSTS[*]}; do
INSTANCE_ID=$(aws ec2 describe-instances \
--filters "Name=tag:Name,Values=${host}" | \
jq -j '.Reservations[].Instances[].InstanceId')
aws ec2 terminate-instances --instance-ids ${INSTANCE_ID}
done
```
### IAM
```
aws iam remove-role-from-instance-profile \
--instance-profile-name kubernetes \
--role-name kubernetes
```
```
aws iam delete-instance-profile \
--instance-profile-name kubernetes
```
```
aws iam delete-role-policy \
--role-name kubernetes \
--policy-name kubernetes
```
```
aws iam delete-role --role-name kubernetes
```
### SSH Keys
```
aws ec2 delete-key-pair --key-name kubernetes
```
### Networking
Be sure to wait about a minute for all VMs to terminate to avoid the following errors:
```
An error occurred (DependencyViolation) when calling ...
```
Network resources cannot be deleted while VMs hold a reference to them.
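Rather than sleeping for a fixed time, the AWS CLI can block until termination finishes with `aws ec2 wait instance-terminated --instance-ids <id>` for each host. The instance-ID lookup it needs is the same jq pattern used above (a sketch on a canned response):

```shell
# Canned describe-instances response; in practice feed the real output of
# `aws ec2 describe-instances --filters "Name=tag:Name,Values=${host}"`,
# then run: aws ec2 wait instance-terminated --instance-ids ${INSTANCE_ID}
RESPONSE='{"Reservations":[{"Instances":[{"InstanceId":"i-0abc123"}]}]}'
INSTANCE_ID=$(echo "${RESPONSE}" | jq -j '.Reservations[].Instances[].InstanceId')
echo "${INSTANCE_ID}"
# → i-0abc123
```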
#### Load Balancers
```
aws elb delete-load-balancer \
--load-balancer-name kubernetes
```
#### Internet Gateways
```
VPC_ID=$(aws ec2 describe-vpcs \
--filters "Name=tag:Name,Values=kubernetes" | \
jq -r '.Vpcs[].VpcId')
```
```
INTERNET_GATEWAY_ID=$(aws ec2 describe-internet-gateways \
--filters "Name=tag:Name,Values=kubernetes" | \
jq -r '.InternetGateways[].InternetGatewayId')
```
```
aws ec2 detach-internet-gateway \
--internet-gateway-id ${INTERNET_GATEWAY_ID} \
--vpc-id ${VPC_ID}
```
```
aws ec2 delete-internet-gateway \
--internet-gateway-id ${INTERNET_GATEWAY_ID}
```
#### Security Groups
```
SECURITY_GROUP_ID=$(aws ec2 describe-security-groups \
--filters "Name=tag:Name,Values=kubernetes" | \
jq -r '.SecurityGroups[].GroupId')
```
```
aws ec2 delete-security-group \
--group-id ${SECURITY_GROUP_ID}
```
#### Subnets
```
SUBNET_ID=$(aws ec2 describe-subnets \
--filters "Name=tag:Name,Values=kubernetes" | \
jq -r '.Subnets[].SubnetId')
```
```
aws ec2 delete-subnet --subnet-id ${SUBNET_ID}
```
#### Route Tables
```
ROUTE_TABLE_ID=$(aws ec2 describe-route-tables \
--filters "Name=tag:Name,Values=kubernetes" | \
jq -r '.RouteTables[].RouteTableId')
```
```
aws ec2 delete-route-table --route-table-id ${ROUTE_TABLE_ID}
```
#### VPC
```
VPC_ID=$(aws ec2 describe-vpcs \
--filters "Name=tag:Name,Values=kubernetes" | \
jq -r '.Vpcs[].VpcId')
```
```
aws ec2 delete-vpc --vpc-id ${VPC_ID}
```
#### DHCP Option Sets
```
DHCP_OPTION_SET_ID=$(aws ec2 describe-dhcp-options \
--filters "Name=tag:Name,Values=kubernetes" | \
jq -r '.DhcpOptions[].DhcpOptionsId')
```
```
aws ec2 delete-dhcp-options \
--dhcp-options-id ${DHCP_OPTION_SET_ID}
```

Binary file not shown.



@@ -1,3 +0,0 @@
chAng3m3,admin,admin
chAng3m3,scheduler,scheduler
chAng3m3,kubelet,kubelet