diff --git a/README.md b/README.md index ca6ad80..e208210 100644 --- a/README.md +++ b/README.md @@ -22,7 +22,7 @@ Kubernetes The Hard Way guides you through bootstrapping a highly available Kube

## Labs

-This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com). While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms.
+This tutorial assumes you have access to the [Google Cloud Platform](https://cloud.google.com/) or [Amazon Web Services](https://aws.amazon.com/). While GCP or AWS is used for the basic infrastructure requirements, the lessons learned in this tutorial can be applied to other platforms.

* [Prerequisites](docs/01-prerequisites.md)
* [Installing the Client Tools](docs/02-client-tools.md)

diff --git a/docs/01-prerequisites.md b/docs/01-prerequisites.md index 3e1a4b5..ea57151 100644 --- a/docs/01-prerequisites.md +++ b/docs/01-prerequisites.md @@ -1,5 +1,10 @@ # Prerequisites

+This tutorial uses Google Cloud Platform (`GCP`) to provision the infrastructure required by the Kubernetes cluster; however, code is also provided for users who prefer Amazon Web Services, in expandable sections marked `AWS`.
+
+
+GCP + ## Google Cloud Platform This tutorial leverages the [Google Cloud Platform](https://cloud.google.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://cloud.google.com/free/) for $300 in free credits. @@ -43,6 +48,44 @@ gcloud config set compute/zone us-west1-c ``` > Use the `gcloud compute zones list` command to view additional regions and zones. +
+ +
+
+AWS
+
+## Amazon Web Services
+
+This tutorial leverages [Amazon Web Services](https://aws.amazon.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. [Sign up](https://portal.aws.amazon.com/billing/signup) for [12 months of free services](https://aws.amazon.com/free/).
+
+> The compute resources required for this tutorial exceed the Amazon Web Services free tier.
+
+## Amazon Web Services CLI
+
+### Install the Amazon Web Services CLI
+
+Follow the Amazon Web Services CLI [documentation](https://aws.amazon.com/cli/) to install and configure the `aws` command line utility.
+
+### Configure the kubernetes-the-hard-way profile
+
+Throughout this tutorial, an AWS CLI profile named `kubernetes-the-hard-way` will be used.
+
+Create the profile and set its default region (us-west-2 in this example):
+
+```
+aws configure set region us-west-2 \
+  --profile kubernetes-the-hard-way
+```
+
+Copy the credentials from the default profile into the new profile:
+
+```
+aws configure set aws_access_key_id "$(aws configure get aws_access_key_id)" \
+  --profile kubernetes-the-hard-way
+
+aws configure set aws_secret_access_key "$(aws configure get aws_secret_access_key)" \
+  --profile kubernetes-the-hard-way
+```
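+To confirm that the new profile resolves to working credentials, you can optionally ask STS which identity it maps to:
+
+```
+aws sts get-caller-identity --profile kubernetes-the-hard-way
+```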
## Running Commands in Parallel with tmux diff --git a/docs/03-compute-resources.md b/docs/03-compute-resources.md index bd92c3c..1d4bb1b 100644 --- a/docs/03-compute-resources.md +++ b/docs/03-compute-resources.md @@ -12,16 +12,52 @@ The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-ad ### Virtual Private Cloud Network + In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/compute/docs/networks-and-firewalls#networks) (VPC) network will be setup to host the Kubernetes cluster. Create the `kubernetes-the-hard-way` custom VPC network: +
+GCP + ``` gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom ``` +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 create-vpc \ + --cidr-block 10.240.0.0/24 \ + --profile kubernetes-the-hard-way \ + --query Vpc.VpcId \ + --output text)" +``` +``` +for opt in support hostnames; do + aws ec2 modify-vpc-attribute \ + --vpc-id "$VPC_ID" \ + --enable-dns-"$opt" '{"Value": true}' \ + --profile kubernetes-the-hard-way +done + +aws ec2 create-tags \ + --resources "$VPC_ID" \ + --tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \ + --profile kubernetes-the-hard-way +``` + +
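+Optionally, read the VPC back to confirm the CIDR block was applied (this assumes `VPC_ID` is still set from the previous step):
+
+```
+aws ec2 describe-vpcs \
+  --vpc-ids "$VPC_ID" \
+  --profile kubernetes-the-hard-way \
+  --query 'Vpcs[0].CidrBlock' \
+  --output text
+```
+
+> output
+
+```
+10.240.0.0/24
+```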
+

+ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets) must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster. +
+GCP + Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network: ``` @@ -30,12 +66,92 @@ gcloud compute networks subnets create kubernetes \ --range 10.240.0.0/24 ``` +
+ +
+AWS + +``` +DHCP_OPTIONS_ID="$(aws ec2 create-dhcp-options \ + --dhcp-configuration \ + "Key=domain-name,Values=$(aws configure get region --profile kubernetes-the-hard-way).compute.internal" \ + "Key=domain-name-servers,Values=AmazonProvidedDNS" \ + --profile kubernetes-the-hard-way \ + --query DhcpOptions.DhcpOptionsId \ + --output text)" + +aws ec2 create-tags \ + --resources "$DHCP_OPTIONS_ID" \ + --tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \ + --profile kubernetes-the-hard-way + +aws ec2 associate-dhcp-options \ + --dhcp-options-id "$DHCP_OPTIONS_ID" \ + --vpc-id "$VPC_ID" \ + --profile kubernetes-the-hard-way + +SUBNET_ID="$(aws ec2 create-subnet \ + --vpc-id "$VPC_ID" \ + --cidr-block 10.240.0.0/24 \ + --profile kubernetes-the-hard-way \ + --query Subnet.SubnetId \ + --output text)" + +aws ec2 create-tags \ + --resources "$SUBNET_ID" \ + --tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \ + --profile kubernetes-the-hard-way + +INTERNET_GATEWAY_ID="$(aws ec2 create-internet-gateway \ + --profile kubernetes-the-hard-way \ + --query InternetGateway.InternetGatewayId \ + --output text)" + +aws ec2 create-tags \ + --resources "$INTERNET_GATEWAY_ID" \ + --tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \ + --profile kubernetes-the-hard-way + +aws ec2 attach-internet-gateway \ + --internet-gateway-id "$INTERNET_GATEWAY_ID" \ + --vpc-id "$VPC_ID" \ + --profile kubernetes-the-hard-way + +ROUTE_TABLE_ID="$(aws ec2 create-route-table \ + --vpc-id "$VPC_ID" \ + --profile kubernetes-the-hard-way \ + --query RouteTable.RouteTableId \ + --output text)" + +aws ec2 create-tags \ + --resources "$ROUTE_TABLE_ID" \ + --tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \ + --profile kubernetes-the-hard-way + +aws ec2 associate-route-table \ + --route-table-id "$ROUTE_TABLE_ID" \ + --subnet-id "$SUBNET_ID" \ + --profile kubernetes-the-hard-way + +aws ec2 create-route \ + --route-table-id "$ROUTE_TABLE_ID" \ + --destination-cidr-block 0.0.0.0/0 \ + --gateway-id "$INTERNET_GATEWAY_ID" \ + --profile kubernetes-the-hard-way +``` + +
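+Optionally, verify that the route table now sends Internet-bound traffic (`0.0.0.0/0`) through the new internet gateway:
+
+```
+aws ec2 describe-route-tables \
+  --route-table-ids "$ROUTE_TABLE_ID" \
+  --profile kubernetes-the-hard-way \
+  --query 'RouteTables[0].Routes[].[DestinationCidrBlock,GatewayId]' \
+  --output table
+```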
+

+ > The `10.240.0.0/24` IP address range can host up to 254 compute instances. ### Firewall Rules Create a firewall rule that allows internal communication across all protocols: +
+GCP + ``` gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --allow tcp,udp,icmp \ @@ -43,8 +159,48 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \ --source-ranges 10.240.0.0/24,10.200.0.0/16 ``` +
+ +
+AWS + +``` +SECURITY_GROUP_ID="$(aws ec2 create-security-group \ + --group-name kubernetes-the-hard-way \ + --description kubernetes-the-hard-way \ + --vpc-id "$VPC_ID" \ + --profile kubernetes-the-hard-way \ + --query GroupId \ + --output text)" + +aws ec2 create-tags \ + --resources "$SECURITY_GROUP_ID" \ + --tags Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared \ + --profile kubernetes-the-hard-way +``` +``` +allow() { + aws ec2 authorize-security-group-ingress \ + --profile kubernetes-the-hard-way \ + --group-id "$SECURITY_GROUP_ID" \ + "$@" +} + +allow --protocol all --source-group "$SECURITY_GROUP_ID" + +for network in 10.200.0.0/16 10.240.0.0/24; do + allow --protocol all --cidr "$network" +done +``` + +
+

+ Create a firewall rule that allows external SSH, ICMP, and HTTPS: +
+GCP + ``` gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ --allow tcp:22,tcp:6443,icmp \ @@ -54,8 +210,29 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \ > An [external load balancer](https://cloud.google.com/compute/docs/load-balancing/network/) will be used to expose the Kubernetes API Servers to remote clients. +
+ +
+AWS + +``` +allow --protocol icmp --port 3-4 --cidr 0.0.0.0/0 + +for port in 22 6443; do + allow --protocol tcp --port "$port" --cidr 0.0.0.0/0 +done +``` + +> An [external load balancer](https://aws.amazon.com/elasticloadbalancing/) will be used to expose the Kubernetes API Servers to remote clients. + +
+

+ List the firewall rules in the `kubernetes-the-hard-way` VPC network: +
+GCP + ``` gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way" ``` @@ -68,8 +245,77 @@ kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp ``` +
+ +
+AWS + +``` +aws ec2 describe-security-groups \ + --filters Name=group-id,Values="$SECURITY_GROUP_ID" \ + --profile kubernetes-the-hard-way \ + --query 'SecurityGroups[0].IpPermissions[].{GroupIds:UserIdGroupPairs[].GroupId,FromPort:FromPort,ToPort:ToPort,IpProtocol:IpProtocol,CidrIps:IpRanges[].CidrIp}' \ + --output table|\ + sed 's/| *DescribeSecurityGroups *|//g'|\ + tail -n +3 +``` + +> output + +``` ++----------+--------------+----------+ +| FromPort | IpProtocol | ToPort | ++----------+--------------+----------+ +| None | -1 | None | ++----------+--------------+----------+ +|| CidrIps || +|+----------------------------------+| +|| 10.200.0.0/16 || +|| 10.240.0.0/24 || +|+----------------------------------+| +|| GroupIds || +|+----------------------------------+| +|| sg-b33811c3 || +|+----------------------------------+| + ++----------+--------------+----------+ +| FromPort | IpProtocol | ToPort | ++----------+--------------+----------+ +| 22 | tcp | 22 | ++----------+--------------+----------+ +|| CidrIps || +|+----------------------------------+| +|| 0.0.0.0/0 || +|+----------------------------------+| + ++----------+--------------+----------+ +| FromPort | IpProtocol | ToPort | ++----------+--------------+----------+ +| 6443 | tcp | 6443 | ++----------+--------------+----------+ +|| CidrIps || +|+----------------------------------+| +|| 0.0.0.0/0 || +|+----------------------------------+| + ++----------+--------------+----------+ +| FromPort | IpProtocol | ToPort | ++----------+--------------+----------+ +| 3 | icmp | 4 | ++----------+--------------+----------+ +|| CidrIps || +|+----------------------------------+| +|| 0.0.0.0/0 || +|+----------------------------------+| +``` + +
+ ### Kubernetes Public IP Address +
+GCP + Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers: ``` @@ -90,14 +336,120 @@ NAME REGION ADDRESS STATUS kubernetes-the-hard-way us-west1 XX.XXX.XXX.XX RESERVED ``` +
+ +
+AWS + +``` +aws elb create-load-balancer \ + --load-balancer-name kubernetes-the-hard-way \ + --listeners Protocol=TCP,LoadBalancerPort=6443,InstanceProtocol=TCP,InstancePort=6443 \ + --subnets "$SUBNET_ID" \ + --security-groups "$SECURITY_GROUP_ID" \ + --profile kubernetes-the-hard-way +``` + +> output + +``` +{ + "DNSName": "kubernetes-the-hard-way-382204365.us-west-2.elb.amazonaws.com" +} +``` + +
+ ## Compute Instances The compute instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04, which has good support for the [containerd container runtime](https://github.com/containerd/containerd). Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process. +
+AWS + +### Create Instance IAM Policies + +``` +cat >kubernetes-iam-role.json <kubernetes-iam-policy.json < + ### Kubernetes Controllers Create three compute instances which will host the Kubernetes control plane: +
+GCP + ``` for i in 0 1 2; do gcloud compute instances create controller-${i} \ @@ -114,6 +466,47 @@ for i in 0 1 2; do done ``` +
+ +
+AWS + +``` +# For ssh access to ec2 machines. +aws ec2 create-key-pair \ + --key-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query KeyMaterial \ + --output text >~/.ssh/kubernetes-the-hard-way + +chmod 600 ~/.ssh/kubernetes-the-hard-way + +for i in 0 1 2; do + instance_id="$(aws ec2 run-instances \ + --associate-public-ip-address \ + --iam-instance-profile Name=kubernetes-the-hard-way \ + --image-id "$IMAGE_ID" \ + --count 1 \ + --key-name kubernetes-the-hard-way \ + --security-group-ids "$SECURITY_GROUP_ID" \ + --instance-type t2.small \ + --private-ip-address "10.240.0.1$i" \ + --subnet-id "$SUBNET_ID" \ + --user-data "name=controller-$i" \ + --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=controller-$i},{Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared}]" \ + --profile kubernetes-the-hard-way \ + --query 'Instances[].InstanceId' \ + --output text)" + + aws ec2 modify-instance-attribute \ + --instance-id "$instance_id" \ + --no-source-dest-check \ + --profile kubernetes-the-hard-way +done +``` + +
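+The `run-instances` command above expects `IMAGE_ID` to name an Ubuntu Server 18.04 AMI and an instance profile called `kubernetes-the-hard-way` to exist (see the IAM section earlier in this lab). If either is not yet defined in your shell, the following is a rough sketch of one way to set them up; the Canonical owner ID, the AMI name filter, and the trust policy are assumptions rather than values taken from this tutorial:
+
+```
+# Most recent Ubuntu 18.04 LTS AMI published by Canonical (owner ID 099720109477) in the profile's region.
+IMAGE_ID="$(aws ec2 describe-images \
+  --owners 099720109477 \
+  --filters 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*' \
+  --profile kubernetes-the-hard-way \
+  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
+  --output text)"
+
+# Role that EC2 instances are allowed to assume; the permissions policy from
+# kubernetes-iam-policy.json still needs to be attached with `aws iam put-role-policy`.
+aws iam create-role \
+  --role-name kubernetes-the-hard-way \
+  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}' \
+  --profile kubernetes-the-hard-way
+
+# Instance profile wrapping the role, referenced by --iam-instance-profile above.
+aws iam create-instance-profile \
+  --instance-profile-name kubernetes-the-hard-way \
+  --profile kubernetes-the-hard-way
+
+aws iam add-role-to-instance-profile \
+  --instance-profile-name kubernetes-the-hard-way \
+  --role-name kubernetes-the-hard-way \
+  --profile kubernetes-the-hard-way
+```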
+ ### Kubernetes Workers Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. The `pod-cidr` instance metadata will be used to expose pod subnet allocations to compute instances at runtime. @@ -122,6 +515,9 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste Create three compute instances which will host the Kubernetes worker nodes: +
+GCP + ``` for i in 0 1 2; do gcloud compute instances create worker-${i} \ @@ -139,10 +535,45 @@ for i in 0 1 2; do done ``` +
+ +
+AWS + +``` +for i in 0 1 2; do + instance_id="$(aws ec2 run-instances \ + --associate-public-ip-address \ + --iam-instance-profile Name=kubernetes-the-hard-way \ + --image-id "$IMAGE_ID" \ + --count 1 \ + --key-name kubernetes-the-hard-way \ + --security-group-ids "$SECURITY_GROUP_ID" \ + --instance-type t2.small \ + --private-ip-address "10.240.0.2$i" \ + --subnet-id "$SUBNET_ID" \ + --user-data "name=worker-$i|pod-cidr=10.200.$i.0/24" \ + --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=worker-$i},{Key=kubernetes.io/cluster/kubernetes-the-hard-way,Value=shared}]" \ + --profile kubernetes-the-hard-way \ + --query 'Instances[].InstanceId' \ + --output text)" + + aws ec2 modify-instance-attribute \ + --instance-id "$instance_id" \ + --no-source-dest-check \ + --profile kubernetes-the-hard-way +done +``` + +
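+The instances take a short while to boot. Optionally, block until every instance carrying the cluster tag reports `running` before moving on to verification:
+
+```
+aws ec2 wait instance-running \
+  --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
+  --profile kubernetes-the-hard-way
+```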
+ ### Verification List the compute instances in your default compute zone: +
+GCP + ``` gcloud compute instances list ``` @@ -159,8 +590,43 @@ worker-1 us-west1-c n1-standard-1 10.240.0.21 XX.XXX.XX.XXX worker-2 us-west1-c n1-standard-1 10.240.0.22 XXX.XXX.XX.XX RUNNING ``` +
+ +
+AWS + +``` +aws ec2 describe-instances \ + --filters \ + Name=instance-state-name,Values=running \ + Name=vpc-id,Values="$VPC_ID" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[].Instances[]|sort_by(@, &Tags[?Key==`Name`]|[0].Value)[].[Tags[?Key==`Name`]|[0].Value,InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress]' \ + --output table +``` + +> output + +``` +---------------------------------------------------------------------------------------- +| DescribeInstances | ++--------------+-----------------------+-------------+--------------+------------------+ +| controller-0| i-07c33497b7e6ee5ce | us-west-2a | 10.240.0.10 | 34.216.239.194 | +| controller-1| i-099ffe8ec525f6bdb | us-west-2a | 10.240.0.11 | 54.186.157.115 | +| controller-2| i-00c1800423320d12f | us-west-2a | 10.240.0.12 | 52.12.162.200 | +| worker-0 | i-00020c75b6703aa99 | us-west-2a | 10.240.0.20 | 54.212.17.18 | +| worker-1 | i-0bf4c8f9f36012d0e | us-west-2a | 10.240.0.21 | 34.220.143.249 | +| worker-2 | i-0b4d2dd686ddd1e1a | us-west-2a | 10.240.0.22 | 35.165.251.149 | ++--------------+-----------------------+-------------+--------------+------------------+ +``` + +
+ ## Configuring SSH Access +
+GCP + SSH will be used to configure the controller and worker instances. When connecting to compute instances for the first time SSH keys will be generated for you and stored in the project or instance metadata as describe in the [connecting to instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance) documentation. Test SSH access to the `controller-0` compute instances: @@ -227,4 +693,27 @@ logout Connection to XX.XXX.XXX.XXX closed ``` +
+ +
+AWS + +``` +get_ip() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text +} +``` +``` +ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" +``` + +
+

+ Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md) diff --git a/docs/04-certificate-authority.md b/docs/04-certificate-authority.md index f8842d9..78dde19 100644 --- a/docs/04-certificate-authority.md +++ b/docs/04-certificate-authority.md @@ -111,6 +111,9 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc Generate a certificate and private key for each Kubernetes worker node: +
+GCP + ``` for instance in worker-0 worker-1 worker-2; do cat > ${instance}-csr.json < + +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" +``` +``` +for i in 0 1 2; do + instance="worker-$i" + hostname="ip-10-240-0-2$i" + + cut -c3- >"$instance-csr.json" < +
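+Each worker CSR is then signed with the CA, as in the GCP flow. Below is a sketch of that signing step inside the same loop; the hostname list is an assumption modeled on the GCP command, extended with the worker's EC2 hostname and its public and private IP addresses:
+
+```
+  external_ip="$(aws ec2 describe-instances \
+    --filters \
+      Name=vpc-id,Values="$VPC_ID" \
+      Name=tag:Name,Values="$instance" \
+    --profile kubernetes-the-hard-way \
+    --query 'Reservations[0].Instances[0].PublicIpAddress' \
+    --output text)"
+  internal_ip="10.240.0.2$i"
+
+  cfssl gencert \
+    -ca=ca.pem \
+    -ca-key=ca-key.pem \
+    -config=ca-config.json \
+    -hostname="$instance,$hostname,$external_ip,$internal_ip" \
+    -profile=kubernetes \
+    "$instance-csr.json" | cfssljson -bare "$instance"
+done
+```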

+ Results: ``` @@ -296,6 +358,9 @@ The `kubernetes-the-hard-way` static IP address will be included in the list of Generate the Kubernetes API Server certificate and private key: +
+GCP + ``` { @@ -333,6 +398,49 @@ cfssl gencert \ } ``` +
+ +
+AWS + +``` +KUBERNETES_PUBLIC_ADDRESS="$(aws elb describe-load-balancers \ + --load-balancer-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'LoadBalancerDescriptions[0].DNSName' \ + --output text)" + +cat >kubernetes-csr.json < +
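+The API server certificate is then generated the same way as on GCP, with the load balancer's DNS name included among the subject alternative names. A sketch follows; the remaining hostnames are assumed to match the GCP command:
+
+```
+cfssl gencert \
+  -ca=ca.pem \
+  -ca-key=ca-key.pem \
+  -config=ca-config.json \
+  -hostname="10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,$KUBERNETES_PUBLIC_ADDRESS,127.0.0.1,kubernetes.default" \
+  -profile=kubernetes \
+  kubernetes-csr.json | cfssljson -bare kubernetes
+```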

+ Results: ``` @@ -390,14 +498,46 @@ service-account.pem Copy the appropriate certificates and private keys to each worker instance: +
+GCP + ``` for instance in worker-0 worker-1 worker-2; do gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/ done ``` +
+ +
+AWS + +``` +get_ip() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text +} +``` +``` +for instance in worker-0 worker-1 worker-2; do + scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \ + ca.pem "$instance-key.pem" "$instance.pem" "ubuntu@$(get_ip "$instance"):~/" +done +``` + +
+

+ Copy the appropriate certificates and private keys to each controller instance: +
+GCP + ``` for instance in controller-0 controller-1 controller-2; do gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ @@ -405,6 +545,22 @@ for instance in controller-0 controller-1 controller-2; do done ``` +
+ +
+AWS + +``` +for instance in controller-0 controller-1 controller-2; do + scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \ + ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem service-account-key.pem service-account.pem \ + "ubuntu@$(get_ip "$instance"):~/" +done +``` + +
+

+ > The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab. Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md) diff --git a/docs/05-kubernetes-configuration-files.md b/docs/05-kubernetes-configuration-files.md index e8ddf9d..ee248ba 100644 --- a/docs/05-kubernetes-configuration-files.md +++ b/docs/05-kubernetes-configuration-files.md @@ -12,18 +12,39 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high Retrieve the `kubernetes-the-hard-way` static IP address: +
+GCP + ``` KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') ``` +
+ +
+AWS + +``` +KUBERNETES_PUBLIC_ADDRESS="$(aws elb describe-load-balancers \ + --load-balancer-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'LoadBalancerDescriptions[0].DNSName' \ + --output text)" +``` + +
+ ### The kubelet Kubernetes Configuration File When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/). Generate a kubeconfig file for each worker node: +
+GCP + ``` for instance in worker-0 worker-1 worker-2; do kubectl config set-cluster kubernetes-the-hard-way \ @@ -47,6 +68,41 @@ for instance in worker-0 worker-1 worker-2; do done ``` +
+ +
+AWS + +``` +for i in 0 1 2; do + instance="worker-$i" + hostname="ip-10-240-0-2$i" + + kubectl config set-cluster kubernetes-the-hard-way \ + --certificate-authority=ca.pem \ + --embed-certs=true \ + --server="https://$KUBERNETES_PUBLIC_ADDRESS:6443" \ + --kubeconfig="$instance.kubeconfig" + + kubectl config set-credentials "system:node:$hostname" \ + --client-certificate="$instance.pem" \ + --client-key="$instance-key.pem" \ + --embed-certs=true \ + --kubeconfig="$instance.kubeconfig" + + kubectl config set-context default \ + --cluster=kubernetes-the-hard-way \ + --user="system:node:$hostname" \ + --kubeconfig="$instance.kubeconfig" + + kubectl config use-context default \ + --kubeconfig="$instance.kubeconfig" +done +``` + +
+

+ Results: ``` @@ -195,18 +251,72 @@ admin.kubeconfig Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance: +
+GCP + ``` for instance in worker-0 worker-1 worker-2; do gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/ done ``` +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" + +get_ip() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text +} +``` +``` +for instance in worker-0 worker-1 worker-2; do + scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \ + "$instance.kubeconfig" kube-proxy.kubeconfig "ubuntu@$(get_ip "$instance"):~/" +done +``` + +
+

+ Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance: +
+GCP + ``` for instance in controller-0 controller-1 controller-2; do gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/ done ``` +
+ +
+AWS + +``` +for instance in controller-0 controller-1 controller-2; do + scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \ + admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig \ + "ubuntu@$(get_ip "$instance"):~/" +done +``` + +
+

+ Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md) diff --git a/docs/06-data-encryption-keys.md b/docs/06-data-encryption-keys.md index 233bce2..08b481b 100644 --- a/docs/06-data-encryption-keys.md +++ b/docs/06-data-encryption-keys.md @@ -34,10 +34,45 @@ EOF Copy the `encryption-config.yaml` encryption config file to each controller instance: +
+GCP + ``` for instance in controller-0 controller-1 controller-2; do gcloud compute scp encryption-config.yaml ${instance}:~/ done ``` +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" + +get_ip() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text +} +``` +``` +for instance in controller-0 controller-1 controller-2; do + scp -i ~/.ssh/kubernetes-the-hard-way -o StrictHostKeyChecking=no \ + encryption-config.yaml "ubuntu@$(get_ip "$instance"):~/" +done +``` + +
+

+ Next: [Bootstrapping the etcd Cluster](07-bootstrapping-etcd.md) diff --git a/docs/07-bootstrapping-etcd.md b/docs/07-bootstrapping-etcd.md index d4be370..9a22764 100644 --- a/docs/07-bootstrapping-etcd.md +++ b/docs/07-bootstrapping-etcd.md @@ -6,10 +6,41 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example: +
+GCP + ``` gcloud compute ssh controller-0 ``` +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" + +get_ip() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text +} +``` +``` +ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" +``` + +
+ ### Running commands in parallel with tmux [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab. @@ -45,17 +76,47 @@ Extract and install the `etcd` server and the `etcdctl` command line utility: The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance: +
+GCP + ``` INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) ``` +
+ +
+AWS + +``` +INTERNAL_IP="$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)" +``` + +
+

+ Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance: +
+GCP + ``` ETCD_NAME=$(hostname -s) ``` +
+ +
+AWS + +``` +ETCD_NAME="$(curl -s http://169.254.169.254/latest/user-data/|tr '|' '\n'|grep '^name='|cut -d= -f2)" +``` + +
+

+ Create the `etcd.service` systemd unit file: ``` diff --git a/docs/08-bootstrapping-kubernetes-controllers.md b/docs/08-bootstrapping-kubernetes-controllers.md index a0ae93c..60251ef 100644 --- a/docs/08-bootstrapping-kubernetes-controllers.md +++ b/docs/08-bootstrapping-kubernetes-controllers.md @@ -6,10 +6,41 @@ In this lab you will bootstrap the Kubernetes control plane across three compute The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example: +
+GCP + ``` gcloud compute ssh controller-0 ``` +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" + +get_ip() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text +} +``` +``` +ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" +``` + +
+ ### Running commands in parallel with tmux [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab. @@ -57,11 +88,26 @@ Install the Kubernetes binaries: The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance: +
+GCP + ``` INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip) ``` +
+ +
+AWS + +``` +INTERNAL_IP="$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)" +``` + +
+

+ Create the `kube-apiserver.service` systemd unit file: ``` @@ -119,6 +165,9 @@ sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/ Create the `kube-controller-manager.service` systemd unit file: +
+GCP + ``` cat < + +
+AWS + +``` +cat < + ### Configure the Kubernetes Scheduler Move the `kube-scheduler` kubeconfig into place: @@ -202,6 +286,9 @@ EOF ### Enable HTTP Health Checks +
+GCP + A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-balancing/network) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. The network load balancer only supports HTTP health checks which means the HTTPS endpoint exposed by the API server cannot be used. As a workaround the nginx webserver can be used to proxy HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server on `https://127.0.0.1:6443/healthz`. > The `/healthz` API server endpoint does not require authentication by default. @@ -243,6 +330,8 @@ sudo systemctl restart nginx sudo systemctl enable nginx ``` +
+ ### Verification ``` @@ -260,10 +349,13 @@ etcd-1 Healthy {"health": "true"} Test the nginx HTTP health check proxy: +
+GCP + ``` curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz ``` - +> output ``` HTTP/1.1 200 OK Server: nginx/1.14.0 (Ubuntu) @@ -275,6 +367,30 @@ Connection: keep-alive ok ``` +
+ +
+AWS + +``` +curl -i \ + --cacert /var/lib/kubernetes/ca.pem \ + -H "Host: kubernetes.default.svc.cluster.local" \ + https://127.0.0.1:6443/healthz +``` +> output +``` +HTTP/2 200 +content-type: text/plain; charset=utf-8 +content-length: 2 +date: Tue, 31 Jul 2018 15:47:02 GMT + +ok +``` + +
+

+ > Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`. ## RBAC for Kubelet Authorization @@ -283,10 +399,25 @@ In this section you will configure RBAC permissions to allow the Kubernetes API > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization. +
+GCP + ``` gcloud compute ssh controller-0 ``` +
+ +
+AWS + +``` +ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" +``` + +
+

+ Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods: ``` @@ -346,6 +477,9 @@ In this section you will provision an external load balancer to front the Kubern Create the external load balancer network resources: +
+GCP + ``` { KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ @@ -376,16 +510,62 @@ Create the external load balancer network resources: } ``` +
+ +
+AWS + +``` +get_instance_id() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].InstanceId' \ + --output text +} + +aws elb register-instances-with-load-balancer \ + --load-balancer-name kubernetes-the-hard-way \ + --instances \ + "$(get_instance_id controller-0)" \ + "$(get_instance_id controller-1)" \ + "$(get_instance_id controller-2)" \ + --profile kubernetes-the-hard-way +``` + +
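+The classic load balancer decides which controllers receive traffic based on its health check. If you want to control that explicitly, the check can be pointed at the API server port; the interval and threshold values below are illustrative:
+
+```
+aws elb configure-health-check \
+  --load-balancer-name kubernetes-the-hard-way \
+  --health-check Target=TCP:6443,Interval=10,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2 \
+  --profile kubernetes-the-hard-way
+```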
+ ### Verification Retrieve the `kubernetes-the-hard-way` static IP address: +
+GCP + ``` KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ --region $(gcloud config get-value compute/region) \ --format 'value(address)') ``` +
+ +
+AWS + +``` +KUBERNETES_PUBLIC_ADDRESS="$(aws elb describe-load-balancers \ + --load-balancer-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'LoadBalancerDescriptions[0].DNSName' \ + --output text)" +``` + +
+

+ Make a HTTP request for the Kubernetes version info: ``` diff --git a/docs/09-bootstrapping-kubernetes-workers.md b/docs/09-bootstrapping-kubernetes-workers.md index a3a50da..5d1eb27 100644 --- a/docs/09-bootstrapping-kubernetes-workers.md +++ b/docs/09-bootstrapping-kubernetes-workers.md @@ -6,10 +6,41 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example: +
+GCP + ``` gcloud compute ssh worker-0 ``` +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" + +get_ip() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text +} +``` +``` +ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip worker-0)" +``` + +
+ ### Running commands in parallel with tmux [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab. @@ -70,11 +101,26 @@ Install the worker binaries: Retrieve the Pod CIDR range for the current compute instance: +
+GCP + ``` POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \ http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr) ``` +
+ +
+AWS + +``` +POD_CIDR="$(curl -s http://169.254.169.254/latest/user-data/|tr '|' '\n'|grep '^pod-cidr='|cut -d= -f2)" +``` + +
+

+ Create the `bridge` network configuration file: ``` @@ -162,6 +208,9 @@ EOF ### Configure the Kubelet +
+GCP + ``` { sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/ @@ -170,8 +219,26 @@ EOF } ``` +
+ +
+AWS + +``` +WORKER_NAME="$(curl -s http://169.254.169.254/latest/user-data/|tr '|' '\n'|grep '^name='|cut -d= -f2)" +sudo mv "$WORKER_NAME-key.pem" "$WORKER_NAME.pem" /var/lib/kubelet/ +sudo mv "$WORKER_NAME.kubeconfig" /var/lib/kubelet/kubeconfig +sudo mv ca.pem /var/lib/kubernetes/ +``` + +
+

+ Create the `kubelet-config.yaml` configuration file: +
+GCP + ``` cat < + +
+AWS + +``` +cat < +
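+On AWS the kubelet configuration mirrors the GCP file, with the TLS certificate paths keyed off `$WORKER_NAME` (set above from the instance user data) rather than the GCE hostname. A sketch, assuming the same defaults as the GCP configuration:
+
+```
+cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
+kind: KubeletConfiguration
+apiVersion: kubelet.config.k8s.io/v1beta1
+authentication:
+  anonymous:
+    enabled: false
+  webhook:
+    enabled: true
+  x509:
+    clientCAFile: "/var/lib/kubernetes/ca.pem"
+authorization:
+  mode: Webhook
+clusterDomain: "cluster.local"
+clusterDNS:
+  - "10.32.0.10"
+podCIDR: "${POD_CIDR}"
+resolvConf: "/run/systemd/resolve/resolv.conf"
+runtimeRequestTimeout: "15m"
+tlsCertFile: "/var/lib/kubelet/${WORKER_NAME}.pem"
+tlsPrivateKeyFile: "/var/lib/kubelet/${WORKER_NAME}-key.pem"
+EOF
+```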

+ Create the `kubelet.service` systemd unit file: ``` @@ -279,11 +377,27 @@ EOF List the registered Kubernetes nodes: +
+GCP + ``` gcloud compute ssh controller-0 \ --command "kubectl get nodes --kubeconfig admin.kubeconfig" ``` +
+ +
+AWS + +``` +ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" \ + "kubectl get nodes --kubeconfig admin.kubeconfig" +``` + +
+

+ > output ``` diff --git a/docs/10-configuring-kubectl.md b/docs/10-configuring-kubectl.md index e524c46..df60e5c 100644 --- a/docs/10-configuring-kubectl.md +++ b/docs/10-configuring-kubectl.md @@ -10,6 +10,9 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high Generate a kubeconfig file suitable for authenticating as the `admin` user: +
+GCP + ``` { KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \ @@ -33,6 +36,36 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user: } ``` +
+ +
+AWS + +``` +KUBERNETES_PUBLIC_ADDRESS="$(aws elb describe-load-balancers \ + --load-balancer-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'LoadBalancerDescriptions[0].DNSName' \ + --output text)" + +kubectl config set-cluster kubernetes-the-hard-way \ + --certificate-authority=ca.pem \ + --embed-certs=true \ + --server="https://$KUBERNETES_PUBLIC_ADDRESS:6443" + +kubectl config set-credentials admin \ + --client-certificate=admin.pem \ + --client-key=admin-key.pem + +kubectl config set-context kubernetes-the-hard-way \ + --cluster=kubernetes-the-hard-way \ + --user=admin + +kubectl config use-context kubernetes-the-hard-way +``` + +
+ ## Verification Check the health of the remote Kubernetes cluster: diff --git a/docs/11-pod-network-routes.md b/docs/11-pod-network-routes.md index f0d39be..bbf13fc 100644 --- a/docs/11-pod-network-routes.md +++ b/docs/11-pod-network-routes.md @@ -12,6 +12,9 @@ In this section you will gather the information required to create routes in the Print the internal IP address and Pod CIDR range for each worker instance: +
+GCP + ``` for instance in worker-0 worker-1 worker-2; do gcloud compute instances describe ${instance} \ @@ -19,6 +22,50 @@ for instance in worker-0 worker-1 worker-2; do done ``` +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" +``` +``` +for i in 0 1 2; do + instance_id="$(aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="worker-$i" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].InstanceId' \ + --output text)" + + instance_ip="$(aws ec2 describe-instances \ + --instance-ids "$instance_id" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PrivateIpAddress' \ + --output text)" + + instance_ud="$(aws ec2 describe-instance-attribute \ + --instance-id "$instance_id" \ + --attribute userData \ + --profile kubernetes-the-hard-way \ + --query UserData.Value \ + --output text|base64 --decode)" + + pod_cidr="$(echo "$instance_ud"|tr '|' '\n'|grep '^pod-cidr='|cut -d= -f2)" + + echo "$instance_ip $pod_cidr" +done +``` + +
+

+ > output ``` @@ -31,6 +78,9 @@ done Create network routes for each worker instance: +
+GCP + ``` for i in 0 1 2; do gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ @@ -40,14 +90,79 @@ for i in 0 1 2; do done ``` +
+ +
+AWS + +``` +ROUTE_TABLE_ID="$(aws ec2 describe-route-tables \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'RouteTables[0].RouteTableId' \ + --output text)" + +for i in 0 1 2; do + instance_id="$(aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="worker-$i" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].InstanceId' \ + --output text)" + + instance_ud="$(aws ec2 describe-instance-attribute \ + --instance-id "$instance_id" \ + --attribute userData \ + --profile kubernetes-the-hard-way \ + --query UserData.Value \ + --output text|base64 --decode)" + + pod_cidr="$(echo "$instance_ud"|tr '|' '\n'|grep '^pod-cidr='|cut -d= -f2)" + + aws ec2 create-route \ + --route-table-id "$ROUTE_TABLE_ID" \ + --destination-cidr-block "$pod_cidr" \ + --instance-id "$instance_id" \ + --profile kubernetes-the-hard-way +done +``` + +
+

+ List the routes in the `kubernetes-the-hard-way` VPC network: +
+GCP + ``` gcloud compute routes list --filter "network: kubernetes-the-hard-way" ``` +
+ +
+AWS + +``` +aws ec2 describe-route-tables \ + --route-table-id "$ROUTE_TABLE_ID" \ + --profile kubernetes-the-hard-way \ + --query 'RouteTables[0].Routes[]|sort_by(@, &DestinationCidrBlock)[].[InstanceId,DestinationCidrBlock,GatewayId]' \ + --output table +``` + +
+

+ > output +
+GCP + ``` NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY default-route-236a40a8bc992b5b kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 @@ -57,4 +172,24 @@ kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0 kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000 ``` +
+ +
+AWS + +``` +---------------------------------------------------------- +| DescribeRouteTables | ++---------------------+-----------------+----------------+ +| None | 0.0.0.0/0 | igw-116a3177 | +| i-0d173dd08280c9f52| 10.200.0.0/24 | None | +| i-0a4ae7e79b0bc3cc9| 10.200.1.0/24 | None | +| i-0a424b69034b9068f| 10.200.2.0/24 | None | +| None | 10.240.0.0/24 | local | ++---------------------+-----------------+----------------+ +``` + +
+

+ Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md) diff --git a/docs/13-smoke-test.md b/docs/13-smoke-test.md index bec472f..2737ed7 100644 --- a/docs/13-smoke-test.md +++ b/docs/13-smoke-test.md @@ -15,6 +15,9 @@ kubectl create secret generic kubernetes-the-hard-way \ Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: +
+GCP + ``` gcloud compute ssh controller-0 \ --command "sudo ETCDCTL_API=3 etcdctl get \ @@ -25,6 +28,41 @@ gcloud compute ssh controller-0 \ /registry/secrets/default/kubernetes-the-hard-way | hexdump -C" ``` +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" + +get_ip() { + aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$1" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[0].Instances[0].PublicIpAddress' \ + --output text +} +``` +``` +ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$(get_ip controller-0)" \ + sudo ETCDCTL_API=3 etcdctl get \ + --endpoints=https://127.0.0.1:2379 \ + --cacert=/etc/etcd/ca.pem \ + --cert=/etc/etcd/kubernetes.pem \ + --key=/etc/etcd/kubernetes-key.pem \ + /registry/secrets/default/kubernetes-the-hard-way|hexdump -C +``` + +
+

+ > output ``` @@ -176,19 +214,62 @@ NODE_PORT=$(kubectl get svc nginx \ Create a firewall rule that allows remote access to the `nginx` node port: +
+GCP + ``` gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ --allow=tcp:${NODE_PORT} \ --network kubernetes-the-hard-way ``` +
+ +
+AWS + +``` +SECURITY_GROUP_ID="$(aws ec2 describe-security-groups \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'SecurityGroups[0].GroupId' \ + --output text)" + +aws ec2 authorize-security-group-ingress \ + --group-id "$SECURITY_GROUP_ID" \ + --protocol tcp \ + --port "$NODE_PORT" \ + --cidr 0.0.0.0/0 \ + --profile kubernetes-the-hard-way +``` + +
+

+ Retrieve the external IP address of a worker instance: +
+GCP + ``` EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ --format 'value(networkInterfaces[0].accessConfigs[0].natIP)') ``` +
+ +
+AWS + +``` +EXTERNAL_IP="$(get_ip worker-0)" +``` + +
+

+ Make an HTTP request using the external IP address and the `nginx` node port: ``` @@ -249,16 +330,54 @@ untrusted 1/1 Running 0 10s 10.200.0.3 Get the node name where the `untrusted` pod is running: +
+GCP + ``` INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}') ``` +
+ +
+AWS + +``` +INSTANCE_PRIVATE_IP="$(kubectl get pod untrusted --output=jsonpath='{.status.hostIP}')" +``` + +
+

+ SSH into the worker node: +
+GCP + ``` gcloud compute ssh ${INSTANCE_NAME} ``` +
+ +
+AWS + +``` +INSTANCE_PUBLIC_IP="$(aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=private-ip-address,Values="$INSTANCE_PRIVATE_IP" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[].Instances[].PublicIpAddress' \ + --output text)" + +ssh -i ~/.ssh/kubernetes-the-hard-way "ubuntu@$INSTANCE_PUBLIC_IP" +``` + +
+

+ List the containers running under gVisor: ``` diff --git a/docs/14-cleanup.md b/docs/14-cleanup.md index dc97a3a..34337fb 100644 --- a/docs/14-cleanup.md +++ b/docs/14-cleanup.md @@ -2,6 +2,9 @@ In this lab you will delete the compute resources created during this tutorial. +
+GCP + ## Compute Instances Delete the controller and worker compute instances: @@ -53,3 +56,117 @@ Delete the `kubernetes-the-hard-way` network VPC: gcloud -q compute networks delete kubernetes-the-hard-way } ``` + +
+ +
+AWS + +``` +VPC_ID="$(aws ec2 describe-vpcs \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Vpcs[0].VpcId' \ + --output text)" +``` +``` +for host in controller-0 controller-1 controller-2 worker-0 worker-1 worker-2; do + INSTANCE_ID="$(aws ec2 describe-instances \ + --filters \ + Name=vpc-id,Values="$VPC_ID" \ + Name=tag:Name,Values="$host" \ + --profile kubernetes-the-hard-way \ + --query 'Reservations[].Instances[].InstanceId' \ + --output text)" + + aws ec2 terminate-instances --instance-ids "$INSTANCE_ID" --profile kubernetes-the-hard-way +done + +aws iam remove-role-from-instance-profile \ + --instance-profile-name kubernetes-the-hard-way \ + --role-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way + +aws iam delete-instance-profile \ + --instance-profile-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way + +aws iam delete-role-policy \ + --role-name kubernetes-the-hard-way \ + --policy-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way + +aws iam delete-role \ + --role-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way + +aws ec2 delete-key-pair \ + --key-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way + +# After all ec2 instances have been terminated. +aws elb delete-load-balancer \ + --load-balancer-name kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way + +INTERNET_GATEWAY_ID="$(aws ec2 describe-internet-gateways \ + --filter Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'InternetGateways[0].InternetGatewayId' \ + --output text)" + +aws ec2 detach-internet-gateway \ + --internet-gateway-id "$INTERNET_GATEWAY_ID" \ + --vpc-id "$VPC_ID" \ + --profile kubernetes-the-hard-way + +aws ec2 delete-internet-gateway \ + --internet-gateway-id "$INTERNET_GATEWAY_ID" \ + --profile kubernetes-the-hard-way + +SECURITY_GROUP_ID="$(aws ec2 describe-security-groups \ + --filters Name=group-name,Values=kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'SecurityGroups[0].GroupId' \ + --output text)" + +aws ec2 delete-security-group \ + --group-id "$SECURITY_GROUP_ID" \ + --profile kubernetes-the-hard-way + +SUBNET_ID="$(aws ec2 describe-subnets \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'Subnets[0].SubnetId' \ + --output text)" + +aws ec2 delete-subnet \ + --subnet-id "$SUBNET_ID" \ + --profile kubernetes-the-hard-way + +ROUTE_TABLE_ID="$(aws ec2 describe-route-tables \ + --filter Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'RouteTables[0].RouteTableId' \ + --output text)" + +aws ec2 delete-route-table \ + --route-table-id "$ROUTE_TABLE_ID" \ + --profile kubernetes-the-hard-way + +aws ec2 delete-vpc \ + --vpc-id "$VPC_ID" \ + --profile kubernetes-the-hard-way + +DHCP_OPTION_SET_ID="$(aws ec2 describe-dhcp-options \ + --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \ + --profile kubernetes-the-hard-way \ + --query 'DhcpOptions[0].DhcpOptionsId' \ + --output text)" + +aws ec2 delete-dhcp-options \ + --dhcp-options-id "$DHCP_OPTION_SET_ID" \ + --profile kubernetes-the-hard-way +``` + +
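+The load balancer and network deletions above assume the instances have finished terminating. To block until they have, something like the following can be run right after the terminate loop; the tag filter matches the tag applied when the instances were created:
+
+```
+aws ec2 wait instance-terminated \
+  --filters Name=tag-key,Values=kubernetes.io/cluster/kubernetes-the-hard-way \
+  --profile kubernetes-the-hard-way
+```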