UPDATED docs/* to be more compliant with markdown linting.

pull/507/head
David J Eddy 2019-11-15 12:12:32 -05:00
parent 5c462220b7
commit 0b24e6bb0e
15 changed files with 335 additions and 449 deletions
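
Most of the diff below does two things: it adds an explicit `sh` language tag to fenced code blocks, and it drops the `{ … }` brace grouping around multi-command shell blocks. A minimal sketch of how such a lint pass might be run locally, assuming the `markdownlint-cli` npm package (the commit message only says "markdown linting" and does not name a tool):

```sh
# Assumption: markdownlint-cli; any markdownlint-compatible runner would do
npm install -g markdownlint-cli

# MD040 is the markdownlint rule that flags fenced code blocks
# that have no language tag -- the violation this commit fixes
markdownlint docs/*.md
```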

.gitignore

@@ -1,50 +1,7 @@
-admin-csr.json
-admin-key.pem
-admin.csr
-admin.pem
-admin.kubeconfig
-ca-config.json
-ca-csr.json
-ca-key.pem
-ca.csr
-ca.pem
-encryption-config.yaml
-kube-controller-manager-csr.json
-kube-controller-manager-key.pem
-kube-controller-manager.csr
-kube-controller-manager.kubeconfig
-kube-controller-manager.pem
-kube-scheduler-csr.json
-kube-scheduler-key.pem
-kube-scheduler.csr
-kube-scheduler.kubeconfig
-kube-scheduler.pem
-kube-proxy-csr.json
-kube-proxy-key.pem
-kube-proxy.csr
-kube-proxy.kubeconfig
-kube-proxy.pem
-kubernetes-csr.json
-kubernetes-key.pem
-kubernetes.csr
-kubernetes.pem
-worker-0-csr.json
-worker-0-key.pem
-worker-0.csr
-worker-0.kubeconfig
-worker-0.pem
-worker-1-csr.json
-worker-1-key.pem
-worker-1.csr
-worker-1.kubeconfig
-worker-1.pem
-worker-2-csr.json
-worker-2-key.pem
-worker-2.csr
-worker-2.kubeconfig
-worker-2.pem
-service-account-key.pem
-service-account.csr
-service-account.pem
-service-account-csr.json
 *.swp
+*.csr
+*.pem
+*.json
+*.kubeconfig
+encryption-config.yaml
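
The consolidated globs should still cover everything the removed explicit list named. A quick check with a stock git command (the file names are just examples taken from the deleted entries):

```sh
# Prints the matching .gitignore rule for each candidate file
git check-ignore -v admin.pem              # expect: *.pem
git check-ignore -v worker-0.csr           # expect: *.csr
git check-ignore -v admin.kubeconfig       # expect: *.kubeconfig
git check-ignore -v encryption-config.yaml
```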

@@ -16,7 +16,7 @@ Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to in
 Verify the Google Cloud SDK version is 262.0.0 or higher:
-```
+```sh
 gcloud version
 ```
@@ -26,25 +26,25 @@ This tutorial assumes a default compute region and zone have been configured.
 If you are using the `gcloud` command-line tool for the first time `init` is the easiest way to do this:
-```
+```sh
 gcloud init
 ```
 Then be sure to authorize gcloud to access the Cloud Platform with your Google user credentials:
-```
+```sh
 gcloud auth login
 ```
 Next set a default compute region and compute zone:
-```
+```sh
 gcloud config set compute/region us-west1
 ```
 Set a default compute zone:
-```
+```sh
 gcloud config set compute/zone us-west1-c
 ```

@@ -2,7 +2,6 @@
 In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
 ## Install CFSSL
 The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
@@ -11,61 +10,62 @@ Download and install `cfssl` and `cfssljson`:
 ### OS X
-```
+```sh
 curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssl
 curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssljson
 ```
-```
+```sh
 chmod +x cfssl cfssljson
 ```
-```
+```sh
 sudo mv cfssl cfssljson /usr/local/bin/
 ```
 Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:
-```
+```sh
 brew install cfssl
 ```
 ### Linux
-```
+```sh
 wget -q --show-progress --https-only --timestamping \
   https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \
   https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson
 ```
-```
+```sh
 chmod +x cfssl cfssljson
 ```
-```
+```sh
 sudo mv cfssl cfssljson /usr/local/bin/
 ```
-### Verification
+### cfssl Verification
 Verify `cfssl` and `cfssljson` version 1.3.4 or higher is installed:
-```
+```sh
 cfssl version
 ```
 > output
-```
+```sh
 Version: 1.3.4
 Revision: dev
 Runtime: go1.13
 ```
-```
+```sh
 cfssljson --version
 ```
-```
+```sh
 Version: 1.3.4
 Revision: dev
 Runtime: go1.13
@@ -75,45 +75,45 @@ Runtime: go1.13
 The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
-### OS X
+### kubectl on OS X
-```
+```sh
 curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl
 ```
-```
+```sh
 chmod +x kubectl
 ```
-```
+```sh
 sudo mv kubectl /usr/local/bin/
 ```
-### Linux
+### kubectl on Linux
-```
+```sh
 wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl
 ```
-```
+```sh
 chmod +x kubectl
 ```
-```
+```sh
 sudo mv kubectl /usr/local/bin/
 ```
-### Verification
+### kubectl Verification
 Verify `kubectl` version 1.15.3 or higher is installed:
-```
+```sh
 kubectl version --client
 ```
 > output
-```
+```sh
 Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
 ```

@@ -16,7 +16,7 @@ In this section a dedicated [Virtual Private Cloud](https://cloud.google.com/com
 Create the `kubernetes-the-hard-way` custom VPC network:
-```
+```sh
 gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
 ```
@@ -24,7 +24,7 @@ A [subnet](https://cloud.google.com/compute/docs/vpc/#vpc_networks_and_subnets)
 Create the `kubernetes` subnet in the `kubernetes-the-hard-way` VPC network:
-```
+```sh
 gcloud compute networks subnets create kubernetes \
   --network kubernetes-the-hard-way \
   --range 10.240.0.0/24
@@ -36,7 +36,7 @@ gcloud compute networks subnets create kubernetes \
 Create a firewall rule that allows internal communication across all protocols:
-```
+```sh
 gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
   --allow tcp,udp,icmp \
   --network kubernetes-the-hard-way \
@@ -45,7 +45,7 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
 Create a firewall rule that allows external SSH, ICMP, and HTTPS:
-```
+```sh
 gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
   --allow tcp:22,tcp:6443,icmp \
   --network kubernetes-the-hard-way \
@@ -56,13 +56,13 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
 List the firewall rules in the `kubernetes-the-hard-way` VPC network:
-```
+```sh
 gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
 ```
 > output
-```
+```sh
 NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY
 kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp
 kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp
@@ -72,20 +72,20 @@ kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000
 Allocate a static IP address that will be attached to the external load balancer fronting the Kubernetes API Servers:
-```
+```sh
 gcloud compute addresses create kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region)
 ```
 Verify the `kubernetes-the-hard-way` static IP address was created in your default compute region:
-```
+```sh
 gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
 ```
 > output
-```
+```sh
 NAME                     REGION    ADDRESS        STATUS
 kubernetes-the-hard-way  us-west1  XX.XXX.XXX.XX  RESERVED
 ```
@@ -98,7 +98,7 @@ The compute instances in this lab will be provisioned using [Ubuntu Server](http
 Create three compute instances which will host the Kubernetes control plane:
-```
+```sh
 for i in 0 1 2; do
   gcloud compute instances create controller-${i} \
     --async \
@@ -122,7 +122,7 @@ Each worker instance requires a pod subnet allocation from the Kubernetes cluste
 Create three compute instances which will host the Kubernetes worker nodes:
-```
+```sh
 for i in 0 1 2; do
   gcloud compute instances create worker-${i} \
     --async \
@@ -143,13 +143,13 @@ done
 List the compute instances in your default compute zone:
-```
+```sh
 gcloud compute instances list
 ```
 > output
-```
+```sh
 NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
 controller-0  us-west1-c  n1-standard-1               10.240.0.10  XX.XXX.XXX.XXX  RUNNING
 controller-1  us-west1-c  n1-standard-1               10.240.0.11  XX.XXX.X.XX     RUNNING
@@ -165,13 +165,13 @@ SSH will be used to configure the controller and worker instances. When connecti
 Test SSH access to the `controller-0` compute instance:
-```
+```sh
 gcloud compute ssh controller-0
 ```
 If this is your first time connecting to a compute instance SSH keys will be generated for you. Enter a passphrase at the prompt to continue:
-```
+```sh
 WARNING: The public SSH key file for gcloud does not exist.
 WARNING: The private SSH key file for gcloud does not exist.
 WARNING: You do not have an SSH key for gcloud.
@@ -183,7 +183,7 @@ Enter same passphrase again:
 At this point the generated SSH keys will be uploaded and stored in your project:
-```
+```sh
 Your identification has been saved in /home/$USER/.ssh/google_compute_engine.
 Your public key has been saved in /home/$USER/.ssh/google_compute_engine.pub.
 The key fingerprint is:
@@ -207,21 +207,21 @@ Waiting for SSH key to propagate.
 After the SSH keys have been updated you'll be logged into the `controller-0` instance:
-```
+```sh
 Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1042-gcp x86_64)
 ...
 Last login: Sun Sept 14 14:34:27 2019 from XX.XXX.XXX.XX
 ```
 Type `exit` at the prompt to exit the `controller-0` compute instance:
-```
+```sh
 $USER@controller-0:~$ exit
 ```
 > output
-```
+```sh
 logout
 Connection to XX.XXX.XXX.XXX closed
 ```

@@ -8,9 +8,7 @@ In this section you will provision a Certificate Authority that can be used to g
 Generate the CA configuration file, certificate, and private key:
-```
+```sh
-{
 cat > ca-config.json <<EOF
 {
   "signing": {
@@ -47,13 +45,11 @@ cat > ca-csr.json <<EOF
 EOF
 cfssl gencert -initca ca-csr.json | cfssljson -bare ca
-}
 ```
 Results:
-```
+```sh
 ca-key.pem
 ca.pem
 ```
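
The `{` and `}` lines removed throughout this file are bash command grouping, not document content: they run the enclosed commands as one unit in the current shell, which mainly matters when a single redirection should capture the group's combined output. Dropping them does not change what the blocks do when the commands are pasted line by line. A small sketch of the semantics, with hypothetical commands:

```sh
# Grouped: one redirection captures the output of both commands
{
  echo "generate csr"
  echo "sign cert"
} > combined.log

# Ungrouped: each command stands on its own, as in the edited docs
echo "generate csr"
echo "sign cert"
```
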
@@ -66,9 +62,7 @@ In this section you will generate client and server certificates for each Kubern
 Generate the `admin` client certificate and private key:
-```
+```sh
-{
 cat > admin-csr.json <<EOF
 {
   "CN": "admin",
@@ -94,13 +88,11 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare admin
-}
 ```
 Results:
-```
+```sh
 admin-key.pem
 admin.pem
 ```
@@ -111,7 +103,7 @@ Kubernetes uses a [special-purpose authorization mode](https://kubernetes.io/doc
 Generate a certificate and private key for each Kubernetes worker node:
-```
+```sh
 for instance in worker-0 worker-1 worker-2; do
 cat > ${instance}-csr.json <<EOF
 {
@@ -150,7 +142,7 @@ done
 Results:
-```
+```sh
 worker-0-key.pem
 worker-0.pem
 worker-1-key.pem
@@ -163,9 +155,7 @@ worker-2.pem
 Generate the `kube-controller-manager` client certificate and private key:
-```
+```sh
-{
 cat > kube-controller-manager-csr.json <<EOF
 {
   "CN": "system:kube-controller-manager",
@@ -191,25 +181,20 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
-}
 ```
 Results:
-```
+```sh
 kube-controller-manager-key.pem
 kube-controller-manager.pem
 ```
 ### The Kube Proxy Client Certificate
 Generate the `kube-proxy` client certificate and private key:
-```
+```sh
-{
 cat > kube-proxy-csr.json <<EOF
 {
   "CN": "system:kube-proxy",
@@ -235,13 +220,11 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare kube-proxy
-}
 ```
 Results:
-```
+```sh
 kube-proxy-key.pem
 kube-proxy.pem
 ```
@@ -250,9 +233,7 @@ kube-proxy.pem
 Generate the `kube-scheduler` client certificate and private key:
-```
+```sh
-{
 cat > kube-scheduler-csr.json <<EOF
 {
   "CN": "system:kube-scheduler",
@@ -278,27 +259,22 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-scheduler-csr.json | cfssljson -bare kube-scheduler
-}
 ```
 Results:
-```
+```sh
 kube-scheduler-key.pem
 kube-scheduler.pem
 ```
 ### The Kubernetes API Server Certificate
 The `kubernetes-the-hard-way` static IP address will be included in the list of subject alternative names for the Kubernetes API Server certificate. This will ensure the certificate can be validated by remote clients.
 Generate the Kubernetes API Server certificate and private key:
-```
+```sh
-{
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')
@@ -331,15 +307,13 @@ cfssl gencert \
   -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
   -profile=kubernetes \
   kubernetes-csr.json | cfssljson -bare kubernetes
-}
 ```
 > The Kubernetes API server is automatically assigned the `kubernetes` internal DNS name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
 Results:
-```
+```sh
 kubernetes-key.pem
 kubernetes.pem
 ```
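
Since the SAN list is the whole point of this certificate, it can be worth inspecting the issued cert directly. A sketch assuming `openssl` is available (it is not part of the tutorial's own tooling):

```sh
# Print the Subject Alternative Name extension of the API server cert
openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"
```
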
@@ -350,9 +324,7 @@ The Kubernetes Controller Manager leverages a key pair to generate and sign serv
 Generate the `service-account` certificate and private key:
-```
+```sh
-{
 cat > service-account-csr.json <<EOF
 {
   "CN": "service-accounts",
@@ -378,13 +350,11 @@ cfssl gencert \
   -config=ca-config.json \
   -profile=kubernetes \
   service-account-csr.json | cfssljson -bare service-account
-}
 ```
 Results:
-```
+```sh
 service-account-key.pem
 service-account.pem
 ```
@@ -394,7 +364,7 @@ service-account.pem
 Copy the appropriate certificates and private keys to each worker instance:
-```
+```sh
 for instance in worker-0 worker-1 worker-2; do
   gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
 done
@@ -402,7 +372,7 @@ done
 Copy the appropriate certificates and private keys to each controller instance:
-```
+```sh
 for instance in controller-0 controller-1 controller-2; do
   gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
     service-account-key.pem service-account.pem ${instance}:~/

@@ -12,7 +12,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
 Retrieve the `kubernetes-the-hard-way` static IP address:
-```
+```sh
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')
@@ -26,7 +26,7 @@ When generating kubeconfig files for Kubelets the client certificate matching th
 Generate a kubeconfig file for each worker node:
-```
+```sh
 for instance in worker-0 worker-1 worker-2; do
   kubectl config set-cluster kubernetes-the-hard-way \
     --certificate-authority=ca.pem \
@@ -51,7 +51,7 @@ done
 Results:
-```
+```sh
 worker-0.kubeconfig
 worker-1.kubeconfig
 worker-2.kubeconfig
@@ -61,8 +61,7 @@ worker-2.kubeconfig
 Generate a kubeconfig file for the `kube-proxy` service:
-```
+```sh
-{
 kubectl config set-cluster kubernetes-the-hard-way \
   --certificate-authority=ca.pem \
   --embed-certs=true \
@@ -81,12 +80,11 @@ Generate a kubeconfig file for the `kube-proxy` service:
   --kubeconfig=kube-proxy.kubeconfig
 kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
-}
 ```
 Results:
-```
+```sh
 kube-proxy.kubeconfig
 ```
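
To see what the `set-cluster`/`set-credentials`/`set-context` calls actually wrote, the generated file can be inspected with a stock kubectl subcommand (certificate data is redacted in the output by default):

```sh
# Show the merged view of the generated kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig
```
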
@@ -94,8 +92,7 @@ kube-proxy.kubeconfig
 Generate a kubeconfig file for the `kube-controller-manager` service:
-```
+```sh
-{
 kubectl config set-cluster kubernetes-the-hard-way \
   --certificate-authority=ca.pem \
   --embed-certs=true \
@@ -114,12 +111,11 @@ Generate a kubeconfig file for the `kube-controller-manager` service:
   --kubeconfig=kube-controller-manager.kubeconfig
 kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
-}
 ```
 Results:
-```
+```sh
 kube-controller-manager.kubeconfig
 ```
@@ -128,8 +124,7 @@ kube-controller-manager.kubeconfig
 Generate a kubeconfig file for the `kube-scheduler` service:
-```
+```sh
-{
 kubectl config set-cluster kubernetes-the-hard-way \
   --certificate-authority=ca.pem \
   --embed-certs=true \
@@ -148,12 +143,11 @@ Generate a kubeconfig file for the `kube-scheduler` service:
   --kubeconfig=kube-scheduler.kubeconfig
 kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
-}
 ```
 Results:
-```
+```sh
 kube-scheduler.kubeconfig
 ```
@@ -161,7 +155,7 @@ kube-scheduler.kubeconfig
 Generate a kubeconfig file for the `admin` user:
-```
+```sh
 {
 kubectl config set-cluster kubernetes-the-hard-way \
   --certificate-authority=ca.pem \
@@ -186,18 +180,15 @@ Generate a kubeconfig file for the `admin` user:
 Results:
-```
+```sh
 admin.kubeconfig
 ```
-##
 ## Distribute the Kubernetes Configuration Files
 Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
-```
+```sh
 for instance in worker-0 worker-1 worker-2; do
   gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
 done
@@ -205,7 +196,7 @@ done
 Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
-```
+```sh
 for instance in controller-0 controller-1 controller-2; do
   gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
 done

@@ -8,7 +8,7 @@ In this lab you will generate an encryption key and an [encryption config](https
 Generate an encryption key:
-```
+```sh
 ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
 ```
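
A quick sanity check that the key decodes back to the 32 bytes AES-256 expects (GNU `base64`; macOS spells the decode flag `-D`):

```sh
# Should print 32
echo "$ENCRYPTION_KEY" | base64 --decode | wc -c
```
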
@@ -16,7 +16,7 @@ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
 Create the `encryption-config.yaml` encryption config file:
-```
+```sh
 cat > encryption-config.yaml <<EOF
 kind: EncryptionConfig
 apiVersion: v1
@@ -34,7 +34,7 @@ EOF
 Copy the `encryption-config.yaml` encryption config file to each controller instance:
-```
+```sh
 for instance in controller-0 controller-1 controller-2; do
   gcloud compute scp encryption-config.yaml ${instance}:~/
 done

@@ -6,7 +6,7 @@ Kubernetes components are stateless and store cluster state in [etcd](https://gi
 The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:
-```
+```sh
 gcloud compute ssh controller-0
 ```
@@ -20,45 +20,41 @@ gcloud compute ssh controller-0
 Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:
-```
+```sh
 wget -q --show-progress --https-only --timestamping \
   "https://github.com/etcd-io/etcd/releases/download/v3.4.0/etcd-v3.4.0-linux-amd64.tar.gz"
 ```
 Extract and install the `etcd` server and the `etcdctl` command line utility:
-```
+```sh
-{
 tar -xvf etcd-v3.4.0-linux-amd64.tar.gz
 sudo mv etcd-v3.4.0-linux-amd64/etcd* /usr/local/bin/
-}
 ```
 ### Configure the etcd Server
-```
+```sh
-{
 sudo mkdir -p /etc/etcd /var/lib/etcd
 sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
-}
 ```
 The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance:
-```
+```sh
 INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
   http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
 ```
 Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
-```
+```sh
 ETCD_NAME=$(hostname -s)
 ```
 Create the `etcd.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/etcd.service
 [Unit]
 Description=etcd
@@ -94,12 +90,10 @@ EOF
 ### Start the etcd Server
-```
+```sh
-{
 sudo systemctl daemon-reload
 sudo systemctl enable etcd
 sudo systemctl start etcd
-}
 ```
 > Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
@@ -108,7 +102,7 @@ EOF
 List the etcd cluster members:
-```
+```sh
 sudo ETCDCTL_API=3 etcdctl member list \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/etcd/ca.pem \
@@ -118,7 +112,7 @@ sudo ETCDCTL_API=3 etcdctl member list \
 > output
-```
+```sh
 3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379
 f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379
 ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379
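
A follow-up probe, sketched under the assumption that the same TLS flags used by the `member list` command apply:

```sh
# Ask the local member whether it considers itself healthy
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```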

@@ -6,7 +6,7 @@ In this lab you will bootstrap the Kubernetes control plane across three compute
 The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `gcloud` command. Example:
-```
+```sh
 gcloud compute ssh controller-0
 ```
@@ -18,7 +18,7 @@ gcloud compute ssh controller-0
 Create the Kubernetes configuration directory:
-```
+```sh
 sudo mkdir -p /etc/kubernetes/config
 ```
@@ -26,7 +26,7 @@ sudo mkdir -p /etc/kubernetes/config
 Download the official Kubernetes release binaries:
-```
+```sh
 wget -q --show-progress --https-only --timestamping \
   "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \
   "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \
@@ -36,35 +36,31 @@ wget -q --show-progress --https-only --timestamping \
 Install the Kubernetes binaries:
-```
+```sh
-{
 chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
 sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
-}
 ```
 ### Configure the Kubernetes API Server
-```
+```sh
-{
 sudo mkdir -p /var/lib/kubernetes/
 sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
   service-account-key.pem service-account.pem \
   encryption-config.yaml /var/lib/kubernetes/
-}
 ```
 The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
-```
+```sh
 INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
   http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
 ```
 Create the `kube-apiserver.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
 [Unit]
 Description=Kubernetes API Server
@@ -112,13 +108,13 @@ EOF
 Move the `kube-controller-manager` kubeconfig into place:
-```
+```sh
 sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
 ```
 Create the `kube-controller-manager.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
 [Unit]
 Description=Kubernetes Controller Manager
@@ -150,13 +146,13 @@ EOF
 Move the `kube-scheduler` kubeconfig into place:
-```
+```sh
 sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
 ```
 Create the `kube-scheduler.yaml` configuration file:
-```
+```sh
 cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
 apiVersion: kubescheduler.config.k8s.io/v1alpha1
 kind: KubeSchedulerConfiguration
@@ -169,7 +165,7 @@ EOF
 Create the `kube-scheduler.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
 [Unit]
 Description=Kubernetes Scheduler
@@ -189,12 +185,10 @@ EOF
 ### Start the Controller Services
-```
+```sh
-{
 sudo systemctl daemon-reload
 sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
 sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
-}
 ```
 > Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
@@ -207,12 +201,12 @@ A [Google Network Load Balancer](https://cloud.google.com/compute/docs/load-bala
 Install a basic web server to handle HTTP health checks:
-```
+```sh
 sudo apt-get update
 sudo apt-get install -y nginx
 ```
-```
+```sh
 cat > kubernetes.default.svc.cluster.local <<EOF
 server {
   listen 80;
@@ -224,32 +218,30 @@
   }
 }
 EOF
-```
+```sh
-```
+```sh
-{
 sudo mv kubernetes.default.svc.cluster.local \
   /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
 sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
-}
 ```
-```
+```sh
 sudo systemctl restart nginx
 ```
-```
+```sh
 sudo systemctl enable nginx
 ```
-### Verification
+### Component Verification
-```
+```sh
 kubectl get componentstatuses --kubeconfig admin.kubeconfig
 ```
-```
+```sh
 NAME                 STATUS    MESSAGE             ERROR
 controller-manager   Healthy   ok
 scheduler            Healthy   ok
@@ -260,11 +252,11 @@ etcd-1 Healthy {"health": "true"}
 Test the nginx HTTP health check proxy:
-```
+```sh
 curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
 ```
-```
+```sh
 HTTP/1.1 200 OK
 Server: nginx/1.14.0 (Ubuntu)
 Date: Sat, 14 Sep 2019 18:34:11 GMT
@@ -286,13 +278,13 @@ In this section you will configure RBAC permissions to allow the Kubernetes API
 The commands in this section will affect the entire cluster and only need to be run once from one of the controller nodes.
-```
+```sh
 gcloud compute ssh controller-0
 ```
 Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
-```
+```sh
 cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRole
@@ -320,7 +312,7 @@ The Kubernetes API Server authenticates to the Kubelet as the `kubernetes` user
 Bind the `system:kube-apiserver-to-kubelet` ClusterRole to the `kubernetes` user:
-```
+```sh
 cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
@@ -349,8 +341,7 @@ In this section you will provision an external load balancer to front the Kubern
 Create the external load balancer network resources:
-```
+```sh
-{
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')
@@ -376,16 +367,15 @@ Create the external load balancer network resources:
   --ports 6443 \
   --region $(gcloud config get-value compute/region) \
   --target-pool kubernetes-target-pool
-}
 ```
-### Verification
+### Instance Verification
 > The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
 Retrieve the `kubernetes-the-hard-way` static IP address:
-```
+```sh
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')
@@ -393,14 +383,13 @@ KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-har
 Make an HTTP request for the Kubernetes version info:
-```
+```sh
 curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
 ```
 > output
-```
+```sh
-{
   "major": "1",
   "minor": "15",
   "gitVersion": "v1.15.3",
@@ -410,7 +399,6 @@ curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
   "goVersion": "go1.12.9",
   "compiler": "gc",
   "platform": "linux/amd64"
-}
 ```
 Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)

@@ -6,7 +6,7 @@ In this lab you will bootstrap three Kubernetes worker nodes. The following comp
 The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Login to each worker instance using the `gcloud` command. Example:
-```
+```sh
 gcloud compute ssh worker-0
 ```
@@ -18,11 +18,9 @@ gcloud compute ssh worker-0
 Install the OS dependencies:
-```
+```sh
-{
 sudo apt-get update
 sudo apt-get -y install socat conntrack ipset
-}
 ```
 > The socat binary enables support for the `kubectl port-forward` command.
@@ -33,13 +31,13 @@ By default the kubelet will fail to start if [swap](https://help.ubuntu.com/comm
 Verify if swap is enabled:
-```
+```sh
 sudo swapon --show
 ```
 If output is empty then swap is not enabled. If swap is enabled run the following command to disable swap immediately:
-```
+```sh
 sudo swapoff -a
 ```
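
`swapoff -a` only lasts until the next boot. A common way to make the change permanent (my addition, not part of this diff; assumes GNU sed and space-delimited fstab fields) is to comment out swap entries in `/etc/fstab`:

```sh
# Comment out any fstab line that mounts a swap device
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```
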
@@ -47,7 +45,7 @@ sudo swapoff -a
 ### Download and Install Worker Binaries
-```
+```sh
 wget -q --show-progress --https-only --timestamping \
   https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
   https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
@@ -60,7 +58,7 @@ wget -q --show-progress --https-only --timestamping \
 Create the installation directories:
-```
+```sh
 sudo mkdir -p \
   /etc/cni/net.d \
   /opt/cni/bin \
@@ -72,8 +70,7 @@ sudo mkdir -p \
 Install the worker binaries:
-```
+```sh
-{
 mkdir containerd
 tar -xvf crictl-v1.15.0-linux-amd64.tar.gz
 tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd
@@ -82,21 +79,20 @@
 chmod +x crictl kubectl kube-proxy kubelet runc
 sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
 sudo mv containerd/bin/* /bin/
-}
 ```
 ### Configure CNI Networking
 Retrieve the Pod CIDR range for the current compute instance:
-```
+```sh
 POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
   http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
 ```
 Create the `bridge` network configuration file:
-```
+```sh
 cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
 {
   "cniVersion": "0.3.1",
@@ -118,7 +114,7 @@ EOF
 Create the `loopback` network configuration file:
-```
+```sh
 cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
 {
   "cniVersion": "0.3.1",
@@ -132,11 +128,11 @@ EOF
 Create the `containerd` configuration file:
-```
+```sh
 sudo mkdir -p /etc/containerd/
 ```
-```
+```sh
 cat << EOF | sudo tee /etc/containerd/config.toml
 [plugins]
   [plugins.cri.containerd]
@@ -150,7 +146,7 @@ EOF
 Create the `containerd.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/containerd.service
 [Unit]
 Description=containerd container runtime
@@ -176,17 +172,15 @@ EOF
 ### Configure the Kubelet
-```
+```sh
-{
 sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
 sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
 sudo mv ca.pem /var/lib/kubernetes/
-}
 ```
 Create the `kubelet-config.yaml` configuration file:
-```
+```sh
 cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
 kind: KubeletConfiguration
 apiVersion: kubelet.config.k8s.io/v1beta1
@@ -214,7 +208,7 @@ EOF
 Create the `kubelet.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
 [Unit]
 Description=Kubernetes Kubelet
@@ -242,13 +236,13 @@ EOF
 ### Configure the Kubernetes Proxy
-```
+```sh
 sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
 ```
 Create the `kube-proxy-config.yaml` configuration file:
-```
+```sh
 cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
 kind: KubeProxyConfiguration
 apiVersion: kubeproxy.config.k8s.io/v1alpha1
@@ -261,7 +255,7 @@ EOF
 Create the `kube-proxy.service` systemd unit file:
-```
+```sh
 cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
 [Unit]
 Description=Kubernetes Kube Proxy
@@ -280,12 +274,10 @@ EOF
 ### Start the Worker Services
-```
+```sh
-{
 sudo systemctl daemon-reload
 sudo systemctl enable containerd kubelet kube-proxy
 sudo systemctl start containerd kubelet kube-proxy
-}
 ```
 > Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
@@ -296,14 +288,14 @@ EOF
 List the registered Kubernetes nodes:
-```
+```sh
 gcloud compute ssh controller-0 \
   --command "kubectl get nodes --kubeconfig admin.kubeconfig"
 ```
 > output
-```
+```sh
 NAME       STATUS   ROLES    AGE   VERSION
 worker-0   Ready    <none>   15s   v1.15.3
 worker-1   Ready    <none>   15s   v1.15.3

@@ -10,8 +10,7 @@ Each kubeconfig requires a Kubernetes API Server to connect to. To support high
 Generate a kubeconfig file suitable for authenticating as the `admin` user:
-```
+```sh
-{
 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
   --region $(gcloud config get-value compute/region) \
   --format 'value(address)')
@@ -30,20 +29,19 @@ Generate a kubeconfig file suitable for authenticating as the `admin` user:
   --user=admin
 kubectl config use-context kubernetes-the-hard-way
-}
 ```
 ## Verification
 Check the health of the remote Kubernetes cluster:
-```
+```sh
 kubectl get componentstatuses
 ```
 > output
-```
+```sh
 NAME                 STATUS    MESSAGE             ERROR
 controller-manager   Healthy   ok
 scheduler            Healthy   ok
@@ -54,13 +52,13 @@ etcd-0 Healthy {"health":"true"}
 List the nodes in the remote Kubernetes cluster:
-```
+```sh
 kubectl get nodes
 ```
 > output
-```
+```sh
 NAME       STATUS   ROLES    AGE    VERSION
 worker-0   Ready    <none>   2m9s   v1.15.3
 worker-1   Ready    <none>   2m9s   v1.15.3

@ -12,7 +12,7 @@ In this section you will gather the information required to create routes in the
Print the internal IP address and Pod CIDR range for each worker instance: Print the internal IP address and Pod CIDR range for each worker instance:
``` ```sh
for instance in worker-0 worker-1 worker-2; do for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \ gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)' --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
@ -21,7 +21,7 @@ done
> output > output
``` ```sh
10.240.0.20 10.200.0.0/24 10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24 10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24 10.240.0.22 10.200.2.0/24
@ -31,7 +31,7 @@ done
Create network routes for each worker instance: Create network routes for each worker instance:
``` ```sh
for i in 0 1 2; do for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \ gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
--network kubernetes-the-hard-way \ --network kubernetes-the-hard-way \
@ -42,13 +42,13 @@ done
List the routes in the `kubernetes-the-hard-way` VPC network: List the routes in the `kubernetes-the-hard-way` VPC network:
``` ```sh
gcloud compute routes list --filter "network: kubernetes-the-hard-way" gcloud compute routes list --filter "network: kubernetes-the-hard-way"
``` ```
> output > output
``` ```sh
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-081879136902de56 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000 default-route-081879136902de56 kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 1000
default-route-55199a5aa126d7aa kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000 default-route-55199a5aa126d7aa kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
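The route-creation loop earlier in this file is also truncated by the hunk boundary. A complete version, assuming each worker's Pod CIDR (`10.200.<i>.0/24`) and internal IP (`10.240.0.2<i>`) follow the pattern shown in the output above, would be along these lines:

```sh
# Sketch of the full loop; destination ranges and next-hop addresses are
# assumed to follow the worker numbering used throughout this tutorial.
for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done
```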


@ -6,13 +6,13 @@ In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts
Deploy the `coredns` cluster add-on: Deploy the `coredns` cluster add-on:
``` ```sh
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
``` ```
> output > output
``` ```sh
serviceaccount/coredns created serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
@ -23,13 +23,13 @@ service/kube-dns created
List the pods created by the `kube-dns` deployment: List the pods created by the `kube-dns` deployment:
``` ```sh
kubectl get pods -l k8s-app=kube-dns -n kube-system kubectl get pods -l k8s-app=kube-dns -n kube-system
``` ```
> output > output
``` ```sh
NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE
coredns-699f8ddd77-94qv9 1/1 Running 0 20s coredns-699f8ddd77-94qv9 1/1 Running 0 20s
coredns-699f8ddd77-gtcgb 1/1 Running 0 20s coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
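If the pods are not yet `Running`, you can optionally wait for the add-on to finish rolling out before continuing. This step is not part of the original lab and assumes the manifest creates a Deployment named `coredns` in the `kube-system` namespace:

```sh
# Optional: block until the CoreDNS Deployment reports all replicas available.
kubectl -n kube-system rollout status deployment/coredns
```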
@ -39,38 +39,38 @@ coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
Create a `busybox` deployment: Create a `busybox` deployment:
``` ```sh
kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600 kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600
``` ```
List the pod created by the `busybox` deployment: List the pod created by the `busybox` deployment:
``` ```sh
kubectl get pods -l run=busybox kubectl get pods -l run=busybox
``` ```
> output > output
``` ```sh
NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 3s busybox 1/1 Running 0 3s
``` ```
Retrieve the full name of the `busybox` pod: Retrieve the full name of the `busybox` pod:
``` ```sh
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}") POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
``` ```
Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod: Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
``` ```sh
kubectl exec -ti $POD_NAME -- nslookup kubernetes kubectl exec -ti $POD_NAME -- nslookup kubernetes
``` ```
> output > output
``` ```sh
Server: 10.32.0.10 Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
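As an optional extra check, not part of the original lab, the same pod can resolve the fully qualified service name, which confirms both the cluster DNS server and the default search path:

```sh
# Optional: resolve the fully qualified name of the kubernetes service.
kubectl exec -ti $POD_NAME -- nslookup kubernetes.default.svc.cluster.local
```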


@ -8,14 +8,14 @@ In this section you will verify the ability to [encrypt secret data at rest](htt
Create a generic secret: Create a generic secret:
``` ```sh
kubectl create secret generic kubernetes-the-hard-way \ kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata" --from-literal="mykey=mydata"
``` ```
Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd: Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
``` ```sh
gcloud compute ssh controller-0 \ gcloud compute ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \ --command "sudo ETCDCTL_API=3 etcdctl get \
--endpoints=https://127.0.0.1:2379 \ --endpoints=https://127.0.0.1:2379 \
@ -27,7 +27,7 @@ gcloud compute ssh controller-0 \
> output > output
``` ```sh
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret| 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern| 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa| 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
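The `etcdctl` invocation above is cut off by the hunk boundary. The complete command, assuming the etcd client certificates were installed under `/etc/etcd/` on the controllers in the earlier labs, looks roughly like this:

```sh
# Sketch of the full command; certificate paths are assumptions based on
# where earlier labs place the etcd client credentials.
gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
```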
@ -53,19 +53,19 @@ In this section you will verify the ability to create and manage [Deployments](h
Create a deployment for the [nginx](https://nginx.org/en/) web server: Create a deployment for the [nginx](https://nginx.org/en/) web server:
``` ```sh
kubectl create deployment nginx --image=nginx kubectl create deployment nginx --image=nginx
``` ```
List the pod created by the `nginx` deployment: List the pod created by the `nginx` deployment:
``` ```sh
kubectl get pods -l app=nginx kubectl get pods -l app=nginx
``` ```
> output > output
``` ```sh
NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE
nginx-554b9c67f9-vt5rn 1/1 Running 0 10s nginx-554b9c67f9-vt5rn 1/1 Running 0 10s
``` ```
@ -76,32 +76,32 @@ In this section you will verify the ability to access applications remotely usin
Retrieve the full name of the `nginx` pod: Retrieve the full name of the `nginx` pod:
``` ```sh
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}") POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
``` ```
Forward port `8080` on your local machine to port `80` of the `nginx` pod: Forward port `8080` on your local machine to port `80` of the `nginx` pod:
``` ```sh
kubectl port-forward $POD_NAME 8080:80 kubectl port-forward $POD_NAME 8080:80
``` ```
> output > output
``` ```sh
Forwarding from 127.0.0.1:8080 -> 80 Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80 Forwarding from [::1]:8080 -> 80
``` ```
In a new terminal make an HTTP request using the forwarding address: In a new terminal make an HTTP request using the forwarding address:
``` ```sh
curl --head http://127.0.0.1:8080 curl --head http://127.0.0.1:8080
``` ```
> output > output
``` ```sh
HTTP/1.1 200 OK HTTP/1.1 200 OK
Server: nginx/1.17.3 Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:10:11 GMT Date: Sat, 14 Sep 2019 21:10:11 GMT
@ -115,7 +115,7 @@ Accept-Ranges: bytes
Switch back to the previous terminal and stop the port forwarding to the `nginx` pod: Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
``` ```sh
Forwarding from 127.0.0.1:8080 -> 80 Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80 Forwarding from [::1]:8080 -> 80
Handling connection for 8080 Handling connection for 8080
@ -128,13 +128,13 @@ In this section you will verify the ability to [retrieve container logs](https:/
Print the `nginx` pod logs: Print the `nginx` pod logs:
``` ```sh
kubectl logs $POD_NAME kubectl logs $POD_NAME
``` ```
> output > output
``` ```sh
127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-" 127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-"
``` ```
@ -144,13 +144,13 @@ In this section you will verify the ability to [execute commands in a container]
Print the nginx version by executing the `nginx -v` command in the `nginx` container: Print the nginx version by executing the `nginx -v` command in the `nginx` container:
``` ```sh
kubectl exec -ti $POD_NAME -- nginx -v kubectl exec -ti $POD_NAME -- nginx -v
``` ```
> output > output
``` ```sh
nginx version: nginx/1.17.3 nginx version: nginx/1.17.3
``` ```
@ -160,7 +160,7 @@ In this section you will verify the ability to expose applications using a [Serv
Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service: Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
``` ```sh
kubectl expose deployment nginx --port 80 --type NodePort kubectl expose deployment nginx --port 80 --type NodePort
``` ```
@ -168,14 +168,14 @@ kubectl expose deployment nginx --port 80 --type NodePort
Retrieve the node port assigned to the `nginx` service: Retrieve the node port assigned to the `nginx` service:
``` ```sh
NODE_PORT=$(kubectl get svc nginx \ NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}') --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
``` ```
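If the `{range ...}`-style jsonpath above gives your kubectl client trouble, an equivalent and more conventional expression is shown below. This is a suggested alternative, not part of the original lab:

```sh
# Alternative jsonpath form for retrieving the assigned node port.
NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{.spec.ports[0].nodePort}')
```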
Create a firewall rule that allows remote access to the `nginx` node port: Create a firewall rule that allows remote access to the `nginx` node port:
``` ```sh
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \ --allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way --network kubernetes-the-hard-way
@ -183,20 +183,20 @@ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service
Retrieve the external IP address of a worker instance: Retrieve the external IP address of a worker instance:
``` ```sh
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \ EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)') --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
``` ```
Make an HTTP request using the external IP address and the `nginx` node port: Make an HTTP request using the external IP address and the `nginx` node port:
``` ```sh
curl -I http://${EXTERNAL_IP}:${NODE_PORT} curl -I http://${EXTERNAL_IP}:${NODE_PORT}
``` ```
> output > output
``` ```sh
HTTP/1.1 200 OK HTTP/1.1 200 OK
Server: nginx/1.17.3 Server: nginx/1.17.3
Date: Sat, 14 Sep 2019 21:12:35 GMT Date: Sat, 14 Sep 2019 21:12:35 GMT


@ -6,7 +6,7 @@ In this lab you will delete the compute resources created during this tutorial.
Delete the controller and worker compute instances: Delete the controller and worker compute instances:
``` ```sh
gcloud -q compute instances delete \ gcloud -q compute instances delete \
controller-0 controller-1 controller-2 \ controller-0 controller-1 controller-2 \
worker-0 worker-1 worker-2 \ worker-0 worker-1 worker-2 \
@ -17,8 +17,7 @@ gcloud -q compute instances delete \
Delete the external load balancer network resources: Delete the external load balancer network resources:
``` ```sh
{
gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \ gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
--region $(gcloud config get-value compute/region) --region $(gcloud config get-value compute/region)
@ -27,12 +26,11 @@ Delete the external load balancer network resources:
gcloud -q compute http-health-checks delete kubernetes gcloud -q compute http-health-checks delete kubernetes
gcloud -q compute addresses delete kubernetes-the-hard-way gcloud -q compute addresses delete kubernetes-the-hard-way
}
``` ```
Delete the `kubernetes-the-hard-way` firewall rules: Delete the `kubernetes-the-hard-way` firewall rules:
``` ```sh
gcloud -q compute firewall-rules delete \ gcloud -q compute firewall-rules delete \
kubernetes-the-hard-way-allow-nginx-service \ kubernetes-the-hard-way-allow-nginx-service \
kubernetes-the-hard-way-allow-internal \ kubernetes-the-hard-way-allow-internal \
@ -42,8 +40,7 @@ gcloud -q compute firewall-rules delete \
Delete the `kubernetes-the-hard-way` network VPC: Delete the `kubernetes-the-hard-way` network VPC:
``` ```sh
{
gcloud -q compute routes delete \ gcloud -q compute routes delete \
kubernetes-route-10-200-0-0-24 \ kubernetes-route-10-200-0-0-24 \
kubernetes-route-10-200-1-0-24 \ kubernetes-route-10-200-1-0-24 \
@ -52,5 +49,4 @@ Delete the `kubernetes-the-hard-way` network VPC:
gcloud -q compute networks subnets delete kubernetes gcloud -q compute networks subnets delete kubernetes
gcloud -q compute networks delete kubernetes-the-hard-way gcloud -q compute networks delete kubernetes-the-hard-way
}
``` ```
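As a final, optional sanity check, not part of the original lab, you can confirm that no tutorial resources remain in the project:

```sh
# Optional: these should all return empty results once cleanup has finished.
gcloud compute instances list --filter "name ~ ^(controller|worker)-[0-9]"
gcloud compute addresses list --filter "name=kubernetes-the-hard-way"
gcloud compute networks list --filter "name=kubernetes-the-hard-way"
```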